an autodidact meets a dilettante…

‘Rise above yourself and grasp the world’ – attributed to Archimedes


Bayesian probability, sans maths (mostly)


Bayesian stuff – it gets more complicated, apparently

Okay, time to get back to sciency stuff, to try to get my head around things I should know more about. Bayesian statistics and probability have been brought to the periphery of my attention many times over the years, but my current slow reading of Daniel Kahneman’s Thinking fast and slow has challenged me to master the subject once and for all (and then doubtless to forget about it forevermore).

I’ve started a couple of pieces on this topic in the past week or so, and abandoned them along with all hope of making sense of what is no doubt a doddle for the cognoscenti, so I clearly need to keep it simple for my own sake. The reason I’m interested is because critics and analysts of both scientific research and political policy-making often complain that Bayesian reasoning is insufficiently utilised, to the detriment of such activities. I can’t pretend that I’ll be able to help out though!

So Thomas Bayes was an 18th century English statistician who left a theorem behind in his unpublished papers, apparently underestimating its significance. The person most responsible for utilising and popularising Bayes’ work was the French polymath Pierre-Simon Laplace. The theorem, or rule, is captured mathematically thusly:

P(A|B) = P(B|A) × P(A) / P(B)

where A and B are events, and P(B), that is, the probability of event B, is not equal to zero. In statistics, the probability of an event’s occurrence ranges from 0 to 1 – meaning zero probability to total certainty.

I do, at least, understand the above equation, which, wordwise, means that the probability of A occurring, given that B has occurred, is equal to the probability of B occurring, given that A has occurred, multiplied by the probability of A’s occurrence, all divided by the probability of B’s occurrence. However, after tackling a few video mini-lectures on the topic I’ve decided to give up and focus on Kahneman’s largely non-mathematical treatment with regard to decision-making. The theorem, or rule, presents, as Kahneman puts it, ‘the logic of how people should change their mind in the light of evidence’. Here’s how Kahneman first describes it:

Bayes’ rule specifies how prior beliefs… should be combined with the diagnosticity of the evidence, the degree to which it favours the hypothesis over the alternative.

D Kahneman, Thinking fast and slow, p154

In the simplest example – if you believe that there’s a 65% chance of rain tomorrow, you really need to believe that there’s a 35% chance of no rain tomorrow, rather than any alternative figure. That seems logical enough, but take this example re US Presidential elections:

… if you believe there’s a 30% chance that candidate x will be elected President, and an 80% chance that he’ll be re-elected if he wins first time, then you must believe that the chances that he will be elected twice in a row are 24%.

This is also logical, but not obvious to a surprisingly large percentage of people. What appears to ‘throw’ people is a story, a causal narrative. They imagine a candidate winning, somewhat against the odds, then proving her worth in office and winning easily next time round – this story deceives them into defying logic and imagining that the chance of her winning twice in a row is greater than that of winning first time around – which is a logical impossibility. Kahneman places this kind of irrationalism within the frame of system 1 v system 2 thinking – roughly equivalent to intuition v concentrated reasoning. His solution to the problem of this kind of suasion-by-story is to step back and take greater stock of the ‘diagnosticity’ of what you already know, or what you have predicted, and how it affects any further related predictions. We’re apparently very bad at this.
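Kahneman’s two-elections example is just the multiplication rule for probabilities – here’s a minimal sketch using the figures above (variable names are mine):

```python
# Kahneman's election example: the probability of a conjunction
# P(elected AND re-elected) = P(elected) * P(re-elected | elected)
p_elected = 0.30               # chance of winning the first election
p_reelected_given_win = 0.80   # chance of re-election, given a first win

p_both = p_elected * p_reelected_given_win
print(round(p_both, 2))  # 0.24

# A conjunction can never be more probable than either conjunct,
# which is why 'more likely to win twice' is a logical impossibility:
assert p_both <= p_elected
assert p_both <= p_reelected_given_win
```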

There are many examples throughout the book of failure to reason effectively from information about base rates, often described as ‘base-rate neglect’. A base rate is a statistical fact that should be taken into account when considering a further probability. For example, when given information about the character of a fictional person T – information deliberately designed to suggest he was stereotypical of a librarian – research participants judged him far more likely to be a librarian than a farmer, even though they knew, or should have known, that farmers outnumber librarians in the workforce by a large factor (the base rate of librarians in the workforce). Of course, the degree to which the base rate was made salient to participants affected their predictions.
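The librarian-or-farmer judgment can be put in Bayesian terms too. The numbers below are purely illustrative (Kahneman’s text doesn’t give these exact figures): suppose farmers outnumber librarians 20 to 1, and the stereotype-laden sketch is four times as likely to fit a librarian as a farmer:

```python
# Base-rate neglect, with illustrative (made-up) numbers:
#   - base rate: farmers outnumber librarians 20 to 1
#   - evidence: the sketch is 4x as likely to describe a librarian
prior_odds = 1 / 20        # librarian : farmer, before any evidence
likelihood_ratio = 4       # P(sketch | librarian) / P(sketch | farmer)

posterior_odds = prior_odds * likelihood_ratio
p_librarian = posterior_odds / (1 + posterior_odds)
print(round(p_librarian, 3))  # 0.167 - still probably a farmer
```

Even strongly stereotypical evidence doesn’t overcome a 20-to-1 base rate, which is exactly the point participants tended to miss.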

Here’s a delicious example of the application, or failure to apply, Bayes’ rule:

A cab was involved in a hit-and-run at night. Two cab companies, Green Cabs and Blue Cabs, operate in the city. You’re given the following data:

– 85% of the cabs in the city are Green, 15% are Blue.

– A witness identified the cab as Blue. The court tested the reliability of the witness under the circumstances that existed on the night of the accident and concluded that the witness correctly identified each one of the two colours 80% of the time and failed 20% of the time.

What is the probability that the car involved in the accident was Blue rather than Green?

D Kahneman, Thinking fast and slow, p166

It’s an artificial scenario, granted, but if we accept the accuracy of those probabilities, we can say this: the prior odds of a Blue cab are .15/.85, and the witness’s evidence multiplies those odds by .8/.2, giving posterior odds of (.15/.85) x (.8/.2) = .706. Converting odds back to a probability – odds divided by one plus odds – gives .706/1.706, or approximately 41%.
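The same 41% drops out of Bayes’ rule directly, without the odds shortcut – a quick sanity check (variable names mine):

```python
# Cab problem: P(Blue | witness says 'Blue'), by Bayes' rule
p_blue, p_green = 0.15, 0.85   # base rates of the two companies
p_say_blue_if_blue = 0.80      # witness reliability
p_say_blue_if_green = 0.20     # witness error rate

# Total probability that the witness says 'Blue':
p_say_blue = (p_say_blue_if_blue * p_blue
              + p_say_blue_if_green * p_green)

p_blue_given_witness = p_say_blue_if_blue * p_blue / p_say_blue
print(round(p_blue_given_witness, 3))  # 0.414, i.e. about 41%
```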

So how close were the research participants to this figure? Most participants ignored the statistical data – the base rates – and gave the figure of 80%. They were more convinced by the witness. However, when the problem was framed differently, by providing causal rather than statistical data, participants’ guesses were more accurate. Here’s the alternative presentation of the scenario:

You’re given the following data:

– the two companies operate the same number of cabs, but Green cabs are involved in 85% of accidents

– the information about the witness is the same as previously presented

The mathematical result is the same, but this time the guesses were much closer to the correct figure. The difference lay in the framing. Green cabs cause accidents. That was the fact that jumped out, whereas in the first scenario, the fact that most clearly jumped out was that the witness identified the offending car as Blue. The statistical data in scenario 1 was largely ignored. In the second scenario, the witness’s identification of the Blue car moderated the tendency to blame the Green cars, whereas in scenario 1 there was no ‘story’ about Green cars causing accidents and the blame shifted almost entirely to the Blue cars, based on the witness’s story. Kahneman named his chapter about this tendency ‘Causes trump statistics’.

So there are causal and statistical base rates, and the lesson is that in much of our intuitive understanding of probability, we simply pay far more attention to causal base rates, largely to our detriment. Also, our causal inferences tend to be stereotyped, so that only if we are faced with surprising causal rates, in particular cases and not presented statistically, are we liable to adjust our probabilistic assessments. Kahneman presents some striking illustrations of this in the research literature. Causal information creates bias in other areas of behaviour assessment too, of course, as in the phenomenon of regression to the mean, but that’s for another day, perhaps.

Written by stewart henderson

August 27, 2019 at 2:52 pm

on electrickery, part 2 – the beginnings


William Gilbert, author of De Magnete, 1600

Canto: So let’s now start at the beginning. What we now call electricity, or even electromagnetism, has been observed and questioned since antiquity. People would’ve wondered about lightning and electrostatic shocks and so forth.

Jacinta: And by an electrostatic shock, you mean the sort we get sometimes when we touch a metal door handle? How does that work, and why do we call it electrostatic?

Canto: Well we could do a whole post on static electricity, and maybe we should, but it happens when electrons – excess electrons if you like – move from your hand to the conductive metal. This is a kind of electrical discharge. For it to have happened you need to have built up electric charge in your body. Static electricity is charge that builds up through contact with clothing, carpet etc. It’s called static because the charge has nowhere to go until it meets a conductor – like that metal door handle.

Jacinta: Yes and it’s more common on dry days, because water molecules in the atmosphere help to dissipate electrons, reducing the charge in your body.

Canto: So the action of your shoes when walking on carpet – and rubber soles are worst for this – creates a transfer of electrons, as does rubbing a plastic rod with a woollen cloth. In fact amber, a plastic-like tree resin, was called ‘elektron’ in ancient Greek. It was noticed in those days that jewellery made from amber often stuck to clothing, like a magnet, causing much wonderment no doubt.

Jacinta: But there’s this idea of ‘earthing’, can you explain that?

Canto: It’s not an idea, it’s a thing. It’s also called grounding, though probably earthing is better because it refers to the physical/electrical properties of the Earth. I can’t go into too much detail on this, its complexity is way above my head, but generally earthing an electrical current means dissipating it for safety purposes – though the Earth can also be used as an electrical conductor, if a rather unreliable one. I won’t go any further as I’m sure to get it wrong if I haven’t already.

Jacinta: Okay, so looking at the ‘modern’ history of our understanding of electricity and magnetism, Elizabethan England might be a good place to start. In the 1570s mathematically minded seamen and navigators such as William Borough and Robert Norman were noting certain magnetic properties of the Earth, and Norman worked out a way of measuring magnetic inclination in 1581. That’s the angle made with the horizon, which can be positive or negative depending on position. It all has to do with the Earth’s magnetic field lines, which don’t run parallel to the surface. Norman’s work was a major inspiration for William Gilbert, physician to Elizabeth I and a tireless experimenter, who published De Magnete (On the Magnet – the short title) in 1600. He rightly concluded that the Earth was itself a magnet, and correctly proposed that it had an iron core. He was the first to use the term ‘electric force’, through studying the electrostatic properties of amber.

Canto: Yes, Gilbert’s work was a milestone in modern physics, greatly influencing Kepler and Galileo. He collected under one head just about everything that was known about magnetism at the time, though he considered it a separate phenomenon from electricity. Easier for me to talk in these historical terms than in physics terms, where I get lost in the complexities within a few sentences.

Jacinta: I know the feeling, but here’s a relatively simple explanation of earthing/grounding from a ‘physics stack exchange’ which I hope is accurate:

Grounding a charged rod means neutralizing that rod. If the rod contains excess positive charge, once grounded the electrons from the ground neutralize the positive charge on the rod. If the rod is having an excess of negative charge, the excess charge flows to the ground. So the ground behaves like an infinite reservoir of electrons.

So the ground’s a sink for electrons but also a source of them.

Canto: Okay, so if we go the historical route we should mention a Chinese savant of the 11th century, Shen Kuo, who wrote about magnetism, compasses and navigation. Chinese navigators were regularly using the lodestone in the 12th century. But moving into the European renaissance, the great mathematician and polymath Gerolamo Cardano can’t be passed by. He was one of the era’s true originals, and he wrote about electricity and magnetism in the mid-16th century, describing them as separate entities.

Jacinta: But William Gilbert’s experiments advanced our knowledge much further. He found that heat and moisture negatively affected the ‘electrification’ of materials, of which there were many besides amber. Still, progress in this era, when idle curiosity was frowned upon, was slow, and nothing much else happened in the field until the work of Otto von Guericke and Robert Boyle in the mid-17th century. They were both interested particularly in the properties, electrical and otherwise, of vacuums.

Canto: But the electrical properties of vacuum tubes weren’t really explored until well into the 18th century. Certain practical developments had occurred though. The ‘electrostatic machine’ was first developed, in primitive form, by von Guericke, and improved throughout the 17th and 18th centuries, but such machines were often seen as little more than a sparky curiosity. There were some theoretical postulations about electrics and non-electrics, including a two-fluid theory, all of which anticipated the concept of conductors and insulators. Breakthroughs occurred in the 1740s with the invention of the Leyden jar, and with experiments in electrical signalling. For example, an ingenious experiment of 1746, conducted by Jean-Antoine Nollet, which connected 200 monks by wires to form a 1.6 kilometre circle, showed that the speed of electrical transmission was very high! Experiments in ‘electrotherapy’ were also carried out on plants, with mixed results.

Jacinta: And in the US, from around this time, Benjamin Franklin carried out his experiments with lightning and kites, and he’s generally credited with the idea of positive to negative electrical flow, though theories of what electricity actually is remained vague. But it seems that Franklin’s fame provided impetus to the field. Franklin’s experiments connected lightning and electricity once and for all, though similar work, both experimental and theoretical, was being conducted in France, England and elsewhere.

Canto: Yes, there’s a giant roll-call of eighteenth century researchers and investigators – among them Luigi Galvani, Jean Jallabert, John Canton, Ebenezer Kinnersley, Giovanni Beccaria, Joseph Priestley, Mathias Bose, Franz Aepinus, Henry Cavendish, Charles-Augustin Coulomb and Alessandro Volta, who progressed our understanding of electrical and magnetic phenomena, so that modern concepts like electric potential, charge, capacitance, current and the like, were being formalised by the end of that century.

Jacinta: Yes, for example Coulomb discovered, or published, a very important inverse-square law in 1784, which I don’t have the wherewithal to put here mathematically, but it states that:

The magnitude of the electrostatic force of attraction between two point charges is directly proportional to the product of the magnitudes of charges and inversely proportional to the square of the distance between them.

This law was an essential first step in the theory of electromagnetism, and it was anticipated by other researchers, including Priestley, Aepinus and Cavendish.
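The law Jacinta declines to write out mathematically is compact enough in code – a sketch in SI units, with k the Coulomb constant (the charges and distance below are made-up examples):

```python
# Coulomb's inverse-square law: F = k * |q1 * q2| / r**2
K = 8.9875517923e9  # Coulomb constant, N*m^2/C^2 (CODATA value)

def coulomb_force(q1, q2, r):
    """Magnitude in newtons of the electrostatic force between
    point charges q1 and q2 (coulombs), separated by r metres."""
    return K * abs(q1 * q2) / r ** 2

# Two 1-microcoulomb charges, 10 cm apart:
f = coulomb_force(1e-6, 1e-6, 0.1)
print(round(f, 3))  # 0.899 (newtons)

# Inverse square: doubling the distance quarters the force
assert abs(coulomb_force(1e-6, 1e-6, 0.2) - f / 4) < 1e-12
```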


Canto: And Volta produced the first electric battery, which he demonstrated before Napoleon at the beginning of the 19th century.

Jacinta: And of course this led to further experimentation – almost impossible to trace the different pathways and directions opened up. In England, Humphry Davy and later Faraday conducted experiments in electrochemistry, and Davy invented the first form of electric light in 1809. Scientists, mathematicians, experimenters and inventors of the early nineteenth century who made valuable contributions include Hans Christian Orsted, Andre-Marie Ampere, Georg Simon Ohm and Joseph Henry, though there were many others. Probably the most important experimenter of the period, in both electricity and magnetism, was Michael Faraday, though his knowledge of mathematics was very limited. It was James Clerk Maxwell, one of the century’s most gifted mathematicians, who was able to translate Faraday’s findings into mathematical equations, and, more importantly, to conceive of the relationship between electricity, magnetism and light in a profoundly different way, to some extent anticipating the work of Einstein.

Canto: And we should leave it there, because we really hardly know what we’re talking about.

Jacinta: Too right – my reading up on this stuff brings my own ignorance to mind with the force of a very large electrostatic discharge….


Written by stewart henderson

October 22, 2017 at 10:09 am

the reveries of a solitary wa*ker: wa*k 4 (universal matters)



could someone be spreading BS over the internet?

The universe is more turbulent than we imagined. It’s a quantum computer. It’s nothing but information. Where’s all the lithium? Is it really spinning, and are we anywhere near the axis? What was in the beginning? Pure energy? What does that mean? Energy without particles? The energy coalesced into particles, so I’ve read. Sounds a bit miraculous to me. The fundamental particles being quarks and electrons. Leptons? But quarks aren’t leptons, they’re fermions but leptons are also fermions but these are but names. Quarks came together in triplets via a strong force, but from whence this force? Something to do with electromagnetism, but that’s just a name. I’m guessing that physicists don’t know how these forces and particles emerged, they can only deduce and describe them mathematically. Quarks and leptons are elementary fermions, that’s to say particles with half-integer spin, according to the spin-statistics theorem. Only one fermion can occupy a particular quantum state at one time, that’s according to the Pauli exclusion principle. Fermions include more than just quarks and leptons (electrons and neutrinos), they can be composite particles made up of an odd number of quarks and leptons, hence baryons made up of quark triplets. Fermions are often opposed to bosons in the sense that they’re associated with particles (matter) but bosons are more associated with force, but the intimate relation between matter and energy blurs this distinction. Anyway this strong force pulled quarks together to form protons and neutrons, while an electromagnetic force pulled together protons and electrons and voila, hydrogen atoms. All this in the turbulent immediate post-bang time. Hydrogen fused with hydrogen to form helium and so on all the way up to lithium, but that’s not far up because lithium comes after helium in the periodic table.
The amount of hydrogen and helium in the universe fits precisely big bang expectations, and in fact is the best evidence for that theory, but where’s all the lithium? There’s only a third as much lithium isotope 7 (with four neutrons) as there should be, but that’s okay cause there’s a superabundance of lithium-6. No, not okay. Some argue that it’s a big problem for the big bang theory, others not, surprise surprise. The period of creation of hydrogen and helium is called the primordial nucleosynthesis period, and it covers the time from a few seconds to 20 minutes or so after the bang. More precisely, the heavier isotopes of hydrogen, as well as helium and some lithium and beryllium, the next one in complexity, were created then and everything else was created much later, in stellar evolution and dissolution. Obviously the big bang released a serious amount of energy, and then things quickly cooled, permitting somehow the creation of elementary leptons such as electrons and electron neutrinos. During these first instances there was also a huge degree of inflation. The earliest instants of the universe are referred to as the Planck epoch, and it’s fair to say that what we know for certain about that minuscule epoch is equally minuscule, but it’s believed that the different fundamental forces posited today were then unified, and gravitation, the weakest of those forces in the present universe, was then much stronger, and maybe subject to quantum effects, which is interesting because though I know little of all this stuff largely due to mathematical ignorance, and of course inattention, I do know that gravity and the quantum world have proved irreconcilable since first theorised. Needless to say the Planck epoch is very different from ours, and it’s at this scale that quantum gravitational effects may be realised. We can’t test this though even with our best particle accelerators. It’s one for the future. Meanwhile, the renormalisation problem.
Well actually renormalisation began as a provisional solution to the problem of infinities.

We describe space-time as a continuum. So there are three dimensions of space, what we call Euclidean space, and a dimension of time. But how does that actually work? Perhaps not very well. I’m talking about a classical mechanical picture, but in relativistic contexts time is enmeshed with space and velocity and gravity. Cosmologists combine the lot into a single manifold called a Minkowski space. All I know of this is that it involves an independent notion of spacetime intervals and is mathematically more complicated than I can begin to comprehend, though supposedly it’s a relatively simple special case of a Lorentzian manifold, which itself is a special case of a pseudo-Riemannian manifold. I’m engaging in mathematics, not humour. Or vice versa. All this is beside the point, it’s just that trying to reconcile quantum theory and relativity is impossible without the creation of infinities, and infinities are much disliked by many cosmologists, being far too messy, and time is out of fashion too, the quantum world simply ignores it. And we still don’t know what happened to the lithium.
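For what it’s worth, the ‘spacetime interval’ just mentioned has a simple form in flat Minkowski space – this much I can transcribe, using one common sign convention (−+++):

```latex
% Invariant interval between two events in Minkowski space:
% all inertial observers agree on s^2, though they disagree
% about \Delta t and the spatial separations taken separately.
s^2 = -c^2\,\Delta t^2 + \Delta x^2 + \Delta y^2 + \Delta z^2
```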

Mathematics has so far been absolutely central to our understanding of the universe. So is the universe or multiverse no more than a mathematical construct? If it is, it’s one that we’ve not yet figured out, and it’s unlikely that we ever will, it just gets more complicated as we develop more sophisticated tools to examine it. I’ve always suspected that the universe/multiverse is as complex as we are capable, with our increasingly ‘precise’ tools and increasingly sophisticated maths, of making it, and so will continue to get more complex, but that’s a sort of sacrilegious solipsism, isn’t it? The universe as increasingly complex projection of an increasingly complex collective consciousness? Is that what they mean when they say it’s a hologram? Probably not.

One more point about infinity. Max Tegmark says that the idea of a finite universe never made sense to him. How could the universe have a boundary, and if so, what’s on the other side? Another way of thinking about this is, if the big bang involved an explosion or, more accurately, a massive, near-instantaneous expansion, what did it expand into? Did this expansion involve a contraction on the other side of the boundary? It’s said that space-time began with the big bang, so there’s no outside. How can we really know that though? Of course if you believe that absolutely everything began with the big bang, then you’ll believe in a finite universe, as the bang began with a particular mass-energy point-bundle, which would have to be finite, and could not be added to or subtracted from, according to what I know about conservation laws. Anyway, enough of all this paddling in the shallows. It’s funny, though, I’ve recently encountered people who are extremely reluctant to talk about such matters, even in my shallow way. They actually suffer from ‘cosmological fear’ (my invention). Something to do with existential lostness, and mortality.

Written by stewart henderson

August 1, 2015 at 9:58 am

how to debate William Lane Craig, or not – part one, in which WLC presents his case



Some years ago I did a wee post on William Lane Craig – to the effect that he was a pushover, more or less. Yet Craig keeps on debating, and claiming ‘victims’. It depends on who you speak to or read, but there’s no doubt that Craig has appeared to come off best in most of the innumerable, and same-ish, debates he engages in with atheist academics and/or celebrities. There’s even a forum-type website here, which declares to the world ‘you are not qualified to debate WLC’ (unless you’ve been studying all that WLC has been studying for the last twenty-odd years). Oddly, though, the writer also declares that WLC’s arguments aren’t that good, so WTF? (I just threw that in there to go with WLC).

So I’m going to prove this writer, Andrew, wrong, by debating WLC right now, and comprehensively thrashing him. I’m going to base WLC’s presentation on a recent debate he had, last month, with ‘the 13th most important atheist in the world’, Alex Rosenberg, whom I’d never heard of before listening to this debate. The debate, called, ‘Is faith in God reasonable?’ followed a format which seems to be of WLC’s devising, in which he always goes first and sets out five points, or six, or as in this case eight (it was a big event), which show why said faith is reasonable (the debate topic could be ‘does God exist?’ or variants thereof, and he could trot out the same six or eight points). He gets 20 minutes or so to do this, and finishes by saying something like – ‘these eight arguments must each be refuted for the opposition to be taken seriously’.  And so the opposition, namely myself (under my esteemed alias Luigi Funesti-Sordido, founding Secretary of the Urbane Society of Sceptical Romantics) will have twenty minutes to refute these eight points, after which there are 12 minutes each for rebuttals, and five minutes each of summing up, then a Q and A session.

But that’s not how this debate will go. Stay tuned for the drama…

WLC makes his way to the podium and begins. I’ve presented his arguments here virtually as-is, with just a bit of editing-out of examples and recapitulations, etc. Go to the debate for the full version.

WLC : I believe that God’s existence best explains a wide range of the data of human experience. Let me mention eight.

First, God is the best explanation of why anything at all exists. Suppose you see a ball by the roadside and you wonder how it got there, and your mate says ‘don’t worry about it, it just exists, there’s no explanation for it’, you’d think this was crazy, and you’d think the same thing even if the ball was swollen up to the size of the universe. So what is the explanation of the universe? It can lie only in a transcendent reality, beyond the material universe, and this transcendent reality is metaphysically necessary in its existence. Now there’s surely only one way to get a contingent universe out of a necessarily existing cause, and that is if the cause is a personal agent who can freely choose to create a contingent reality. It therefore follows that the best explanation of the contingent universe is a transcendent, personal, being, that’s to say, God. In sum, 1. Every contingent thing has an explanation of its existence. 2. If the universe has an explanation of its existence, that explanation is a transcendent, personal being. 3. The universe is a contingent thing. 4 Therefore the universe has an explanation of its existence (from 1,3). 5. Therefore the explanation of the universe is a transcendent, personal being (from 2,4).

Second, God is the best explanation of the origin of the universe. We have strong evidence that the universe isn’t eternal in the past but had an absolute beginning. In 2003, Borde, Guth and Vilenkin were able to prove that any universe which has on average been in a state of cosmic expansion cannot be infinite in the past but must have a past space-time boundary. What makes their proof so powerful is that it holds regardless of the physical description of the very early universe. Because we don’t yet have a quantum theory of gravity, we can’t yet provide a physical description of the first split-second of the universe, but the B-G-V theorem is independent of any physical description of that moment. Their theorem implies that the quantum vacuum state, which may have characterized the early universe, cannot be eternal in the past, but must have had an absolute beginning. Even if our universe is just a tiny part of a so-called multiverse composed of many universes, their theorem requires that the multiverse itself must have had an absolute beginning. Of course, highly speculative scenarios, such as loop quantum gravity models, string models, even closed time-like curves have been proposed to try to avoid this absolute beginning. These models are fraught with problems, but the bottom line is that none of these models, even if true, succeeds in restoring an eternal past. Last spring at a conference in Cambridge celebrating the 70th birthday of Stephen Hawking, Vilenkin delivered a paper entitled ‘Did the Universe have a beginning?’, which surveyed current cosmology with respect to that question. He argued, and I quote, ‘none of these scenarios can actually be past-eternal’. He concluded, ‘all the evidence we have says that the universe had a beginning’. But then the inevitable question arises: why did the universe come into being, what brought the universe into existence? There must have been a transcendent cause which brought the universe into being. In summary, 1. The universe began to exist. 2. If the universe began to exist, then the universe has a transcendent cause. 3. Therefore, the universe has a transcendent cause. By the very nature of the case, that cause must be a transcendent, immaterial being. Now, there are only two possible things that can fit that description. Either an abstract object, like a number, or an unembodied mind or consciousness. But abstract objects don’t stand in causal relations. Therefore the cause of the universe is plausibly an unembodied mind or person, and thus we are brought not merely to a transcendent cause of the universe, but to its personal creator.

Third. God is the best explanation of the applicability of mathematics to the physical world. Philosophers and scientists have puzzled over what the physicist Eugene Wigner called ‘the unreasonable effectiveness of mathematics’. How is it that a mathematical theorist like Peter Higgs can sit down at his desk and predict through calculation the existence of a fundamental particle which experimentalists thirty years later, after investing millions of dollars and thousands of man-hours, are finally able to detect? Mathematics is the language of nature. But how is this to be explained? If mathematical objects are abstract entities, causally isolated from the universe, then the applicability of mathematics is, in the words of philosopher of mathematics Penelope Maddy, ‘a happy coincidence’. On the other hand if mathematical objects are just useful fictions, how is it that nature is written in the language of these fictions? In his book Dr Rosenberg emphasizes that naturalism doesn’t tolerate cosmic coincidences, but the naturalist has no explanation of the uncanny applicability of mathematics to the physical world. By contrast the theist has a ready explanation. When God created the physical universe he designed it on the mathematical structure he had in mind. We can summarize this argument: 1. If God did not exist, the applicability of mathematics would be a happy coincidence. 2. The applicability of mathematics is not a happy coincidence. 3. Therefore God exists.

Fourth. God is the best explanation for the fine-tuning of the universe for intelligent life. In recent decades, scientists have been stunned by the discovery that the initial conditions of the big bang were fine-tuned for the existence of intelligent life with a precision and delicacy that literally defy human comprehension. Now there are three live explanatory options for this extraordinary fine-tuning. Physical necessity, chance, or design. Physical necessity is not, however, a plausible explanation because the finely tuned constants and quantities are independent of the laws of nature and therefore they are not physically necessary. So could the fine-tuning be due to chance? The problem with this explanation is that the odds of a life-permitting universe gotten by our laws of nature are so infinitesimal that they cannot be reasonably faced. Therefore the proponents of chance have been forced to postulate the existence of a world-ensemble of other universes, preferably infinite in number and randomly ordered so that life-permitting universes would appear by chance somewhere in the ensemble. Not only is this hypothesis to borrow Richard Dawkins’ phrase an ‘unparsimonious extravagance’, but, it faces an insuperable objection. By far, most of the observable universes in a world-ensemble would be worlds in which a single brain fluctuates into existence out of the vacuum and observes its otherwise empty world. Thus if our world were just a random member of a world-ensemble, we ought to be having observations like that. Since we don’t, that strongly disconfirms the world-ensemble hypothesis. So chance is also not a good explanation. It follows that design is the best explanation of the fine-tuning of the universe, and thus the fine-tuning of the universe constitutes evidence for a cosmic designer.

Fifth. God is the best explanation of intentional states of consciousness in the world. Philosophers are puzzled by states of intentionality. Intentionality is the property of being about something, or of something; it signifies the object-directedness of our thoughts. For example, I can think about my summer vacation, or I can think of my wife. No physical object has this sort of intentionality. A chair, or a stone, or a glob of tissue like the brain is not about, or 'of', something else; only mental states or states of consciousness are about other things. As a materialist, Dr Rosenberg recognizes this fact, and so concludes that on atheism there really are no intentional states. Dr Rosenberg boldly claims that we never really think about anything. But this seems incredible. Obviously, I am thinking about Dr Rosenberg's argument. This seems to me to be a reductio ad absurdum of atheism. By contrast, on theism, because God is a mind, it's hardly surprising that there should be finite minds. Thus intentional states fit comfortably into a theistic worldview. So we can argue: 1. If God did not exist, intentional states of consciousness would not exist. 2. But intentional states of consciousness do exist. 3. Therefore God exists.

Sixth. God is the best explanation of objective moral values and duties in the world. In moral experience we apprehend moral values and duties which impose themselves as objectively binding and true. For example, we all recognize that it's wrong to walk into a school and shoot little children and their teachers. On a naturalistic view, however, there's nothing really wrong with this: moral values are just the subjective by-product of biological evolution and social conditioning. Dr Rosenberg is brutally honest about the implications of his atheism. He writes, 'there's no such thing as morally right or wrong, individual human life is meaningless and without ultimate moral value. We need to face the fact that nihilism is true.' By contrast, the theist grounds objective moral values in God and our moral duties in his commands. The theist thus has the explanatory resources which the atheist lacks to ground objective moral values and duties. Hence we may argue: 1. Objective moral values and duties exist. 2. But if God did not exist, objective moral values and duties would not exist. 3. Therefore God exists.

Seventh. God is the best explanation of the historical facts about Jesus of Nazareth. Historians have reached something of a consensus that Jesus came on the scene with an unprecedented sense of divine authority, the authority to stand and speak in God's place. He claimed that in himself the kingdom of God had come, and as visible demonstrations of this fact he carried out a ministry of miracle-working and exorcisms. But the supreme confirmation of his claim was his resurrection from the dead. If Jesus did indeed rise from the dead, then it would seem that we have a divine miracle on our hands, and thus evidence for the existence of God. Now I realize that most people think that the resurrection of Jesus is something you just accept by faith, or not, but there are actually three facts recognized by the majority of historians which, I believe, are best explained by the resurrection of Jesus. Fact 1: On the Sunday after his crucifixion, Jesus's tomb was found empty by a group of his women followers. Fact 2: On separate occasions, different individuals and groups of people saw appearances of Jesus alive after his death. Fact 3: The original disciples suddenly came to believe in the resurrection of Jesus, despite having every predisposition to the contrary. The eminent British scholar N. T. Wright, near the end of his 800-page study of the historicity of Jesus's resurrection, concludes that the empty tomb and post-mortem appearances of Jesus had been established to such a high degree of historical probability as to be 'virtually certain, akin to the death of Caesar Augustus in AD 14 or the fall of Jerusalem in AD 70'. Naturalistic attempts to explain away these three great facts, like 'the disciples stole the body' or 'Jesus wasn't really dead', have been universally rejected by contemporary scholarship. The simple fact is that there just is no plausible naturalistic explanation of these facts.
And therefore it seems to me that the Christian is amply justified in believing that Jesus rose from the dead and was who he claimed to be. But that entails that God exists. Thus we have a good inductive argument to the existence of God based on the facts concerning the resurrection of Jesus.

Eighth. God can be personally known and experienced. This isn't really an argument for God's existence; rather, it's the claim that you can know that God exists wholly apart from arguments, simply by personally experiencing him. Philosophers call beliefs like this 'properly basic beliefs'. They aren't based on some other beliefs; rather, they're part of the foundations of a person's system of beliefs. Other properly basic beliefs would be belief in the reality of the past, or the existence of the external world. In the same way, belief in God is, for those who seek him, properly basic, grounded in our experience of God. Now if this is so, then there's a danger that arguments for God could actually distract our attention from God himself. The Bible promises, 'draw near to God and he will draw near to you'. We mustn't so concentrate on the external proofs that we fail to hear the inner voice of God speaking to our own hearts. For those who listen, God becomes a personal reality in their lives.

In summary then we’ve seen eight respects in which God provides a better explanation of the world than naturalism. For all of these reasons I believe that belief in God is eminently reasonable. If [the ineffable Mr Funesti-Sordido] is to persuade us otherwise, he must first tear down all eight of the reasons I’ve presented, and then in their place erect a case of his own to show why belief in God is unreasonable. Unless and until he does that, I think we should agree that it is reasonable to believe in God.

After the voluminous applause dies away, the redoubtable Luigi Funesti-Sordido, Founding Secretary of the (new) USSR rises to the occasion, and changes the world….

Written by stewart henderson

March 13, 2013 at 12:14 pm