an autodidact meets a dilettante…

‘Rise above yourself and grasp the world’ – attributed to Archimedes


1914 – 2014: celebrating a loss of appetite




I’ve read at least enough about WW1 to be aware that its causes, and the steps taken towards war, were very complex and contestable. There are plenty of historians, professional and amateur, who’ve suggested that, if not for x, or y, war may have been avoided. However, I don’t think there’s any doubt that a ‘force’, one which barely exists today, a force felt by all sides in the potential conflict of the time, made war very difficult to avoid. I’ll call this force the appetite for war, but it needs to be understood more deeply, to divest it of its vagueness. We know that, in 1914, lads as young as 14 sneaked their way into the militaries of their respective countries to experience the irresistible thrill of warfare. A great many of them paid the ultimate price. Few of these lambs to the slaughter were discouraged from their actions – on the contrary. Yet 100 years on, this attitude seems bizarre, disgusting and obscene. And we don’t even seem to realise how extraordinarily thorough this transformation has been.

Let’s attempt to go back to those days. They were the days when the size of your empire was the measure of your manliness. The Brits had a nice big fat one, and the Germans were sorely annoyed, having come late to nationhood and united military might, but with few foreign territories left to conquer and dominate. They continued to build up their arsenal while fuming with frustration. Expansionism was the goal of all the powerful nations, as it always had been, and in earlier centuries, as I’ve already outlined, it was at the heart of scores of bloody European conflicts. In fact, it’s probably fair to say that the years of uneasy peace before 1914 contributed to the inevitability of the conflict. Peace was considered an almost ‘unnatural’ state, leading to lily-livered namby-pambiness in the youth of Europe. Another character-building, manly war was long overdue.

Of course, all these expansionist wars of the past led mostly to stalemates and backwards and forwards exchanges of territory, not to mention mountains of dead bodies and lakes of blood, but they made numerous heroic reputations – Holy Roman Emperor Charles V and his son Philip II of Spain, Gustavus Adolphus of Sweden, Frederick the Great of Prussia, Peter the Great of Russia, Louis XIV of France and of course Napoleon Bonaparte. These ‘greats’ of the past have always evoked mixed reactions in me, and the feelings are well summed up by Pinker in The Better Angels of our Nature:

The historic figures who earned the honorific ‘So-and-So the Great’ were not great artists, scholars, doctors or inventors, people who enhanced human happiness or wisdom. They were dictators who conquered large swaths of territory and the people in them. If Hitler’s luck had held out a bit longer, he probably would have gone down in history as Adolf the Great.

While I’m not entirely sure about that last sentence, these reflections are themselves an indication of how far we’ve come, and how far we’ve been affected by the wholesale slaughter of two world wars and the madness of the ‘mutually assured destruction’ era that followed them. The fact that we’ve now achieved a military might far beyond the average person’s ability to comprehend, rendering obsolete the old world of battlefields and physical heroics, has definitely removed much of the thrill of combat, now more safely satisfied in computer games. But let’s return again to that other country, the past.

In the same month that the war began, August 1914, the Order of the White Feather was founded, with the support of a number of prominent women of the time, including the author and anti-suffragette Mrs Humphry Ward (whom we might now call Mary) and the suffragette leaders Emmeline and Christabel Pankhurst. It was extremely popular, so much so that it interfered with government objectives – white feathers were sent even to those convalescing from the horrors of the front lines, and to those dedicated to arms manufacturing in their home countries. Any male of a certain age who wasn’t in uniform or ‘over there’ was fair game. Not that the white feather idea was new with WWI – it had been made popular by the novel The Four Feathers (1902), set during the Sudan campaign of 1882, and the idea had been used in the British Empire since the eighteenth century – but it reached a peak of popularity, a last explosive gasp – or not quite, for it was revived briefly during WWII. Since then, though, partly as a result of greater awareness of the carnage of WWI, the white feather has been used more as a symbol of peace and pacifism. The Quakers in particular took it to heart as a badge of honour, and it became a symbol for the British Peace Pledge Union (PPU) in the thirties, a pacifist organisation with a number of distinguished writers and intellectuals, such as Aldous Huxley, Bertrand Russell and Storm Jameson.

There was no PPU or anything like it, however, in the years before WWI. Yet the enthusiasm for war of 1914 soon met with harsh reality in the form of Ypres and the Somme. By the end of 1915 the British Army was ‘depleted’ to the tune of over half a million men, and conscription was introduced, for the first time ever in Britain, in 1916. It had been mooted for some time, for of course the war had been catastrophic for ordinary soldiers from the start, and it quickly became clear that more bodies were needed. Not surprisingly, though, resistance to the carnage had begun to grow. An organisation called the No-Conscription Fellowship (NCF), consisting mainly of socialists and Quakers, was established, and it campaigned successfully to have a ‘conscience clause’ inserted in the 1916 Military Service (conscription) Act. The clause allowed people to refuse military service if it conflicted with their beliefs, but they had to argue their case before a tribunal. Of course ‘conchies’ were treated with some disdain, and were less tolerated by the British government as the war proceeded, during which time the Military Service Act was expanded, first to include married men up to 41 years of age (the original Act had become known as the Bachelor’s Bill) and later to include men up to 51 years of age. But the British government’s attitude didn’t necessarily represent that of the British people, and the NCF and related organisations grew in numbers as the war progressed, in spite of government and jingoist media campaigns to suppress them.

In Australia, two conscription plebiscites, in 1916 and 1917, were narrowly defeated. In New Zealand, the government simply imposed the Military Service Act on its people without bothering to ask them. Those who resisted were often treated brutally, but their numbers increased as the war progressed. However, at no time, in any of the warring nations, did the anti-warriors have the numbers to be a threat to their governments’ ‘sunk cost’ policies.

So why was there such an appetite then and why is the return of such an appetite unthinkable today? Can we just put it down to progress? Many skeptics are rightly suspicious of ‘progress’ as a term that breeds complacency and even an undeserved sense of superiority over the primitives of the past, but Pinker and others have argued cogently for a civilising process that has operated, albeit partially and at varying rates in various states, since well before WWI, indeed since the emergence of governments of all stripes. The cost, in human suffering, of WWI and WWII, and the increasingly sophisticated killing technology that has recently made warfare as unimaginable and remote as quantum mechanics, have led to a ‘long peace’ in the heart of Europe at least – a region which, as my previous posts have shown, experienced almost perpetual warfare for centuries. We shouldn’t, of course, assume that the present stability will be the future norm, but there are reasons for optimism (as far as warfare and violence are concerned – the dangers for humanity lie elsewhere).

Firstly, the human rights movement, in the form of an international movement dedicated to peace and stability between nations for the sake of their citizens, was born out of WWI in the form of the League of Nations, which, while not strong enough to resist the Nazi impetus toward war in the thirties, formed the structural foundation for the later United Nations. The UN is, IMHO, a deeply flawed organisation, based as it is on the false premise of national sovereignty and the inward thinking thus entailed, but as an interim institution for settling disputes and at least trying to keep the peace, it’s far better than nothing. For example, towards the end of the 20th century, the concepts of crimes against humanity and genocide were given more legal bite, and heads of state began, for the first time in history, to be held accountable for their actions in international criminal courts run by the UN. Obviously, considering the invasion of Iraq and other atrocities, we have a long way to go, but hopefully one day even the most powerful and, ipso facto, most bullying nations will be forced to submit to international law.

Secondly, a more universal and comprehensive education system in the west, which, over the past century and particularly in recent decades, has emphasised critical thinking and individual autonomy, has been a major factor in the questioning of warfare and conscription, in recognising the value of children and youth, and in loosening the grip of authority figures. People are far less easily conned into going to war than ever before, and are generally more sceptical of their governments.

Thirdly, globalism and the internationalism of our economy, our science, our communications systems, and the problems we face, such as energy, food production and climate change, have meant that international co-operation is far more important to us than empire-building. Science, for those literate enough to understand it, has all but destroyed the notion of race and all the baggage attendant upon it. There are fewer barriers to empathy – to attack other nations is tantamount to attacking ourselves. The United Nations, ironic though that title often appears to be, has spawned or inspired many other organisations of international co-operation, from the ICC to the Intergovernmental Panel on Climate Change.

There are many other related developments which have moved us towards co-operation and away from belligerence, among them being the greater democratisation of nations – the enlargement of the franchise in existing democracies or proto-democracies, and the democratisation of former Warsaw Pact and ‘Soviet Socialist’ nations – and the growing similarity of national interests, leading to more information and trade exchanges.

So there’s no sign that the ‘long peace’ in Europe, so often discussed and analysed, is going to be broken in the foreseeable future. To be sure, it hasn’t been perfect, with the invasions of Hungary in 1956 and Czechoslovakia in 1968, and the not-so-minor Balkans War of the 90s, and I’m not sure if Ukraine is a European country (and neither are many Ukrainians, it seems), but the broad movements are definitely towards co-operation in Europe, movements that we can only hope will continue to spread worldwide.

Written by stewart henderson

August 22, 2014 at 9:05 am

the rise of the nones, or, reasons to be cheerful (within limits)


This is a presentation based on a couple of graphs.

The rise of the nones, that is, those who answer ‘none’ when asked about their religious affiliation in surveys and censuses, has been one of the most spectacular, and often unheralded, developments of the last century in the west. It has been most spectacular in the past 50 years, and it appears to be accelerating.

The rise of the nones in Australia


This graph tells a fascinating story about the rise of the nones in Australia. It’s a story that would, I think, share many features with other western countries, such as New Zealand and Canada, but also the UK and most Western European nations, though there would be obvious differences in their Christian make-up.

The graph comes from the Australian Bureau of Statistics, and it presents the answers given by Australians to the religious question in each census from 1901 to 2011. The blue bar represents Anglicans. In the early 20th century, Anglicanism was the dominant religion, peaking in 1921 at about 43% of the population. Its decline in recent years has been rapid. English immigration has obviously slowed in recent decades, and Anglicanism is on the nose now even in England. In 2011, only 17% of Australians identified as Anglicans. The decline is unlikely to reverse itself, obviously.

The red striped bar represents Catholics – I’ll come to them in a moment. The grey hatched bar represents devotees of other Christian denominations. In the last census, just under 19% of Australians were in that category, and the percentage is declining. The category is internally dynamic, however, with Uniting Church, Presbyterian and Lutheran believers dropping rapidly and Pentecostals very much on the rise.

The green hatched bar represents the nones, first represented in 1971, when the option of saying ‘none’ was first introduced. This was a result of pressure built up during the censuses of the sixties – that seminal decade – when people were declaring that they had no religion even though there was no provision in the census for doing so. Immediately, as you can see, a substantial number of nones ‘came out’ in the 71 census, and the percentage of ‘refuseniks’ (the purple bar) was almost halved. But then in the 76 census, the percentage of refuseniks doubled again, while the percentage of nones increased. The Christians were the ones losing out, a trend that has continued to the present. Between 1996 and 2006 the percentage of self-identifying Christians dropped from 71% to 64% – a staggering drop in 10 years. The figure now, after the 2011 census, is down to 61%. If this trend continues, the percentage of Christians will drop below 50% by the time of the 2031 census. Of course predictions are always difficult, especially about the future.
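For what it’s worth, the 2031 projection can be checked with a quick back-of-envelope extrapolation of the census figures just quoted (71% in 1996, 64% in 2006, 61% in 2011). The assumption of a steady linear decline is mine, and purely illustrative – real demographic trends rarely stay linear:

```python
# A minimal sketch: linear extrapolation of the Christian share of the
# Australian census, using the three figures quoted in the post.
years = [1996, 2006, 2011]
pct_christian = [71.0, 64.0, 61.0]

# Average rate of change across the whole span, in percentage points per year
rate = (pct_christian[-1] - pct_christian[0]) / (years[-1] - years[0])

def projected(year):
    """Project the Christian share forward from the 2011 figure,
    assuming the average 1996-2011 rate of decline continues."""
    return pct_christian[-1] + rate * (year - years[-1])

print(round(rate, 2))             # -0.67 points per year
print(round(projected(2031), 1))  # 47.7 – below the 50% mark by 2031
```

So on this (admittedly crude) assumption the 50% line is crossed around the late 2020s, consistent with the claim above.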

One thing is surely certain, though. Whether or not the decline in Christianity accelerates, it isn’t going to be reversed. As Heinrich von Kleist put it, ‘When once we’ve eaten of the tree of knowledge, we can never return to the state of innocence’.

The situation after the 2011 census is that 22.3% of Australia’s population are nones, the second biggest category in the census. Catholics are the biggest with 25.3%, down from 26% in 2006 (and about 26.5% in 2001). The nones are on track to be the biggest category after the next census, or the one after that. Arguably, though, it’s already the biggest category. The refusenik category in the last census comprised 9.4%, of which at least half could fairly be counted as nones, given that the religious tend to want to be counted as such. That would take the nones up to around 27%. An extraordinary result for a category first included only 40 years ago.

Let me dwell briefly on this extraordinariness. As you can see, in the first three censuses presented in this graph, the percentage of professed Christians was in the high nineties. That’s to say, in the first two decades of the twentieth century, virtually everyone identified as Christian. This represents the arse-end of a scenario that persisted for a thousand years, dating back to the 9th and 10th centuries when the Vikings and the last northern tribes were converted from paganism. We are witnessing nothing less than the death throes of Christianity in the west. Of course, we’re only at the beginning, and it will be, I’m sure, a long long death agony. Catholicism still has an iron grip in South America, in spite of the scandals it’s failing to deal with, and it’s making headway in Africa. But in its heartland, in its own backyard, its power is greatly diminished, and there’s no turning back.

The rise of the nones worldwide


But there’s an even more exciting story to tell here. The rise of the nones isn’t simply a rejection of Christianity, it’s a rejection of religion. And with that I’ll go to my second graph. This shows that the nones, at 750 million, have risen quickly to be the fourth largest religious category after Christians, 2.2 billion, Muslims, 1.6 billion, and Hindus, 900 million. These numbers represent substantial proportions of the populations of Australia and New Zealand, Canada, the USA and western Europe, as well as nations outside the Christian tradition, such as China and Japan. Never before in human history has this been the case.

One thing we know about the early civilisations is that they were profoundly religious. The Sumerians of the third millennium BCE, the earliest people of whom we have records, worshipped at least four principal gods, Anu, Enlil, Ninhursag and Enki. These, as well as the Egyptian god Amun-Ra, are among the oldest gods we can be certain about, but it’s likely that some of the figurines and statues recovered by archaeologists, such as the roughly 25,000-year-old Venus of Willendorf, represented deities.

Why was religion so universal in earlier times?

We don’t know if the ancient Sumerian, Egyptian and Indus Valley civilisations were universally religious, but it’s likely that they were – because supernatural agency offered the best explanation for events that couldn’t be explained otherwise. And there were an awful lot of such events. Why did the crop fail this time? Why has the weather changed so much? Why did my child sicken and die? Why has this plague been visited upon our people? Why did that nearby mountain blow its top and rain fire and burning rocks down on us?

Even today, in our insurance policies, ‘acts of god’ – a most revealing phrase – are mentioned as those unforeseen events that insurers are reluctant to provide cover for. Nowadays, when some fundie describes the Haitian earthquake or Hurricane Katrina as a deliberate act of a punishing god, we laugh or feel disgusted, but this was a standard response to disasters in earlier civilisations. Given our default tendency to attribute agency when in doubt – a very useful evolutionary trait – and our ancestors’ lack of knowledge about human origins, disease, climate, natural disasters and so on, it’s hardly surprising that they assumed that non-material paternal or maternal figures – resembling the all-powerful and often capricious beings who surround us in our earliest years, and whose ways are ever mysterious – were the cause of so many of their unlooked-for joys and miseries.

Why has that universality flown out the window?

It’s hardly surprising then that the rise of the nones in the west coincides with the rising success and the growing explanatory power of science. For the nones, creation myths have been replaced by evolution, geology and cosmology, sin has been replaced by psychology, and a judging god has been replaced by the constabulary and the judiciary. I don’t personally believe that non-believers are morally superior to believers because we ‘know how to be good without god’. We’ve just transferred our fear of god to our fear of the CCTV cameras – as well as fear for our reputations in the new ultra-connected ‘social hub’.

It’s obvious though that the scientific challenge to ye olde Acts of God is very uneven worldwide. In the more impoverished and heavily tribalised parts of Africa, India, China and the Middle East, the challenge is virtually non-existent. Furthermore, it’s a very new challenge even in the west. To take one example, our understanding of earthquakes, tsunamis and volcanic activity has greatly increased in recent times through advances in technology and also in theory, most notably plate tectonic theory. Its precursor, continental drift, was advanced in the early 20th century by Alfred Wegener amongst others, but plate tectonics didn’t gain general scientific acceptance until the sixties and didn’t penetrate to the general public till the seventies and eighties. Even today, in many western countries, if you ask people about plate tectonics they’ll shrug or give vague accounts. And if you think plate tectonics is simple, have a look at any scientific paper about it and you’ll soon realise otherwise. Of course the same goes for just about any scientific theory. Science is a hard slog, while the idea of acts of god comes to us almost as naturally as breathing.

In spite of this, science is beginning to win the challenge, due to a couple of factors. First and foremost is that the scientific approach, and the technology that has emerged from it, has been enormously successful in transforming our world. Second, our western education system, increasingly based on critical thinking and questioning, has undermined religious concepts and has given us the self-confidence to back our own judgments and to emerge from the master-slave relationships religion engenders. The old god of the gaps is finding those gaps narrowing, though of course the gaps in many people’s minds are plenty big enough for him to hold court there for the term of their natural lives.

The future for the nones

While there’s little doubt that polities such as Australia, New Zealand, Canada and the European Union will become less and less religious, and that other major polities such as China and Japan are unlikely to ‘find’ religion in the future, we shouldn’t kid ourselves that any of the major religions are going to disappear in our lifetimes or those of our grandchildren. Africa and some parts of Asia will continue to be fertile hunting grounds for the two major proselytising religions, and Islam has as firm a hold on the Middle East as Catholicism has on Latin America. If you’re looking at it in terms of numbers, clearly the fastest growing parts of the world are also the most religious. But of course it’s not just a numbers game, it’s also about power and influence. In all of the secularising countries, including the USA, it’s the educated elites that are the most secular. These are the people who will be developing the technologies of the future, and making decisions about the future directions of our culture and our education. So, yes, reasons to be cheerful for future generations. I look forward to witnessing the changing scene for as long as I can.

what can we learn from religion?




Those are not at all to be tolerated who deny the Being of a God. Promises, Covenants and Oaths, which are the Bonds of Humane Society, can have no hold upon an Atheist.

John Locke, ‘A letter concerning toleration’, 1689

In my last post I referred to some aspects of religious belief that I think are worth focusing on if we want to get past the rational/irrational, or even the true/false debates. Alain de Botton created quite a stir recently when he claimed that arguments about the truth/falsity of religion were boring and without much value – or something like that. Typically, I both agree and disagree. There are essential empirical questions at stake, as I argued in my critique of Stephen Jay Gould here, but they’re hardly key to getting a handle on religion’s enormous popularity and endurance. That requires a deeper understanding of the psychological underpinnings of religious belief.

First, I’ve already written of the fact that, for all very young children, adults are supernatural beings. They’ve yet to learn about human mortality and limitations. They certainly learn quickly about their own pain and discomfort, but it comes as a shock when they first observe that all these competent, powerful, protective giants can be hurt, angry and frustrated just like them. These findings should hardly surprise us – children at this stage are entirely dependent on adults for their survival. These adults, they observe, can throw them up in the air and hopefully catch them, they can walk across a room in three seconds flat, they can transport them by car or plane to a completely different world, they’re not afraid of anything, and they miraculously provide all sustenance and succour.

While non-believers mostly understand such basic childhood beliefs, many are highly impatient with those who haven’t, at an appropriate age, abandoned this ‘theory of mind’ and replaced it with a more rational or sophisticated scientific worldview. The response of many psychologists in the field would be that, yes, we do change, but the idea of the supernatural, of transcending the usual limitations, has a long, lingering effect. The popularity of fairies, Harry Potter and Spiderman, which take us through early childhood into adolescence and beyond, attests to this. It’s worth noting that the nerdiest atheists are avid Trekkies and Whovians.

But none of this is really disturbing or unhealthy in the way that religious belief seems to be in the eyes of many non-believers – such as myself. The world’s most secular polities – in Australia, New Zealand, Canada, Japan and many European countries – are also the most law-abiding, secure and contented, as countless surveys show. As a regular dipper into history, I can’t help but note that social life in god-obsessed pre-Enlightenment Europe was far more volatile, cruel and corrupt than it is today in the era of democracy, human rights and secularism. Locke’s remarks above have been thoroughly refuted by modern experience – though I suspect this is due more to our regularised legal frameworks and functioning police forces than to the greater moral virtue of non-believers.

So for many of us, the point is not to understand religion, but to change it. Or rather, to neutralise it by understanding it and then applying that understanding within a more secular framework. For example, one of the themes of the religious is that you can’t be good without god x, y or z. Atheists rarely concede that theists might have a point here. The stock response is a personal one: ‘I don’t need a supernatural fantasy-figure to frighten me into being good; I’m good because I have respect for others and for my environment’, and so on. Psychological study, however, tells us a different story.

The Lebanese-born social psychologist Ara Norenzayan, at the University of British Columbia, points out that many of the gods of small societies have little interest in morality. Instead, ‘being good’ in these small societies is enforced by their very size, and their inescapability. Kin altruism and reciprocity, being the subject of gossip, the fear of ostracism, these are what keep society members on the right track. As numbers increase, though, a sense of anonymity engenders a greater tendency towards cheating and self-serving behaviours. Studies show that even wearing dark glasses, like the Tontons Macoutes, makes it easier to engage in anti-social behaviour. People behave much better when watched, by an audience, by a camera, and even by a large drawing of an eye in the corner of a shop.

The idea that non-believers can be ‘tricked’ into behaving better by the picture of an eye watching them should make us think again, not about gods, but about being watched. And about how we still over-attribute agency in our thinking. Civil libertarians get their backs up about CCTV cameras on every street corner, but there’s no doubt they’ve been a success in catching robbers and muggers and king-hitters in the act, or just before or after. Even those of us with no urge to steal, or who, like me, left that urge behind long ago, tend to notice whether or not a shop features an electronic scanning device, and if they’re like me they’ll wonder about the shop’s vulnerability or otherwise, and the trustworthiness or desperation of the customers around them. As to the painted eye, I presume it doesn’t have the deterrent effect of cameras and scanners, but the fact that it works at all should make us think again about our basic beliefs. Or does it only work on the religious?

That was a joke.

So how do more secular societies utilise the idea that someone knowing if you’ve been bad or good makes for a more moral, or at least law-abiding society? Well, it appears from the statistics that either they’ve already done so, or they’ve found other ways of being good. I suspect it’s been a complex mix of substitute gods, comprehensive education and community expectations. Large scale society has naturally subdivided into smaller groups based on family, business, sport, academic or professional interest and so on, so the age-old stabilisers of kinship, reciprocity and reputation within the group are still there, and these are bolstered by a greater set of ‘watched’ networks. Trade and travel, international relations, the internet, all of these things are always in process of being regulated to reflect community concepts of fairness. We are our own Big Brother (another supernatural agent). Modern liberal education teaches kids from an early age about human rights and environmental responsibility, so much so that they’re often happy to lecture their parents about it. The Freudian concept of the superego is a kind of internalised supernatural parental figure, finger-wagging at us during our weaker moments. The declaration of human rights, accepted by most countries today, though criticised as artificial and without teeth, surely presents a better framework for moral behaviour in the modern world than the often obscure and contradictory stories and proverbs found in the Bible and other religious texts. In short, there are many ways we’ve worked out for behaving well and generally flourishing in a secular society.

So I’m basically saying there isn’t much we can learn from religion, with respect to moral policing, that we haven’t learned already. But what about community and social bonding? In the USA and in other highly religious societies, the populace seems to be very united in its religion – especially against the irreligious. Some non-believers are concerned to replicate religion’s success in this area, and I’ve heard that there’s an atheist church – or I think they call it an atheist assembly, meeting on Sundays – somewhere in my area. I’m not particularly inclined to attend. Non-believers don’t necessarily have much in common apart from a lack of interest in religion, and I’m wary of in-group thinking anyway. I’m wary of just the kind of bonding mentioned above, a bonding that might depend upon mutual congratulation and the mocking, belittling or despising of believers.

Non-believers are of course no less community-minded than the religious. Business, sporting, scientific and small-town communities, these attract us as social animals regardless of our views on the supernatural, and I don’t think we need a top-down ‘alternative’ to religious congregations or community spirit as advocated by de Botton.

Many of the religious point out that they’re more involved in charitable works than selfish unbelievers. Where are the atheist alternatives to Centacare and Anglicare, the welfare and social services arms of the Catholic and Anglican denominations? But these organisations have built up their considerable infrastructure and expertise under extremely favourable tax circumstances which have been a part of Australia’s religious history for a couple of centuries, so they’re always more favourably placed to win government and other contracts for social and educational services. I’ve experienced personally the frustrations of humanist organisations trying to attain the same tax-exempt status for charitable purposes. They’re not given a look-in. Nevertheless there are many powerful and effective NGOs such as Oxfam and MSF, and important human rights bodies like Amnesty International and Human Rights Watch, whose impetus comes directly from the secular human rights movement.

I would also argue, as a former employee of Centacare (as an educator) and of Anglicare (as a foster-carer), that one result of their having cornered so much of the education and social services market is that they’ve become more secularised. They no longer require their workers to share their supernatural beliefs, and this has enabled them to reach a wider market, which they’ve been able to expand largely by downplaying or eliminating the proselytising. I’ve never heard any god-talk from Centacare or Anglicare employers, and this would surely not have been the case fifty years ago. It’s the same in Catholic schools, I suspect, with so many non-Catholics sending their kids there due to doubts about under-funded state schools.

This is all to the good, as too-exclusive Christian or religious communities – as well as non-religious communities – lead to us-them problems. We need to be secure in our position on the supernatural without being dismissive.

So, what in the end do we have to learn from religion? My answer, frankly, is nothing much. We have far more to learn from history and from clear-minded examination of the evidence we uncover about ourselves and our fellow organisms in this shared biosphere.




Written by stewart henderson

April 25, 2014 at 8:16 am

spirituality issues, encore


a mob of didges, right way up

To me – and I’ve written about this before – the invocation of the supernatural, the ‘call’ of the supernatural, if you will, is something deeply psychological, and so not to be sniffed at, though sniff at it I often do.

I’m prompted to write about this because of a program I saw recently on Heath Ledger (Australia’s own), an understandably romantic, mildly hagiographic presentation, in which a few film directors and friends fondly remembered him as wise beyond his years, with hidden depths, a kind of inner force, a certain je ne sais quoi, that sort of thing. As both a romantic and a skeptic, I was torn as usual. The word ‘spiritual’ was given an airing, unsurprisingly, though mercifully it wasn’t dwelt on. I once came up with my own definition of spirituality: ‘To be spiritual is to believe there’s more to this world than this world, and to know that by believing this you’re a better person than those who don’t believe it’. This might sound a mite cynical but I didn’t mean it to be, or maybe I did.

Anyway one of Ledger’s associates, a film director I think, told this story of the young Heath. A number of friends were partying in his apartment when he, the director, picked up a didgeridoo, which obviously Ledger had brought with him from Australia, and attempted to play it, but not knowing much about the instrument, held it upside-down. Heath gently took it from him and corrected him, saying ‘no, no, if you hold it that way it will lose its power, the power of the instrument and its maker,’ or some such thing. And the seriousness and respectfulness with which this young actor spoke of his didge impressed the director, who considered this a favourite memory, something which caught an ‘essence’ of Ledger that he wanted to preserve.

I’ve been bothered by this tale, and by my ambivalent response to it, ever since. It would be superfluous, I suppose, to say that I don’t believe that briefly holding a didge upside-down has any permanent effect on its musical power.

It’s quite likely that Ledger didn’t believe this either, though you never know. What I’m fairly sure of, though, was that his respectfulness was genuine, and that there was something very likeable, to me at least, in this.

All of this takes me back to a piece I wrote some years ago, since lost, about big and small religions. I was contrasting the ‘big’ religions, like Catholicism and the two main strands of Islam, with their political power in the big world, often horrific in its impact, with the ‘small’ religions or spiritual belief systems, such as those found among Australian Aboriginal or some African societies, which have no political power in the big world but provide their adherents with identity and a kind of social energy that’s marvellous to contemplate. My piece focused on the art work of Emily Kame Kngwarreye, whose prolific and astonishing oeuvre, with its characteristic energy and vitality, clearly owed so much to the beliefs and practices of her ‘mob’, the so-called Utopian Community in Central Australia, between Alice Springs and Tennant Creek to the north.

Those beliefs and practices include dreaming stories and totemic identifications that many western skeptics, such as myself, might find difficult to swallow, in spite of a certain romantic appeal. The fact is, though, that the Utopian Community has been remarkably successful, in terms of the usual measures of well-being, and particularly in the area of health and mortality, compared to other Aboriginal groups, and its success has been put down to tighter community living, an outdoor outstation life, the use of traditional foods and medicines, and a greater resistance to the more destructive western products, such as alcohol.

This might put a red-blooded but reflective skeptic in something of a quandary, and the response might be something like – ‘well, the downside of their vitality and health, derived from spiritual beliefs which have served them well for thousands of years, is that, in order to preserve it, they must live in this bubble of tribal thinking, unpierced by modern evolutionary or cosmological knowledge, and this bubble must inevitably burst.’ Must it? Is there a pathway from tribalism to modern globalism that isn’t entirely destructive? Is the preservation of tribal spiritual beliefs a good thing in itself? Can we take the statement, that holding a didgeridoo upside-down affects its spirit, as a truth over and above, or alongside, the contrasting truths of physical laws?

I don’t know the answer to these questions, of course. Groping my way through these issues, I would say that we should respect and acknowledge those beliefs that give a people their dignity, and which have served them for so long, but perhaps that’s the easy generosity of someone outside the system, someone unlikely to be affected or diminished by it. These are, after all, small religions, from our perspective, not the big, profoundly ambitious religions intent on global domination, with their missionaries and their jihadists and their historical trampling of other belief systems, as in Mexico and South America and Africa and here in Australia.

Of course there’s the question – what if those small religions grew bigger and more ambitious? Highly unlikely – but what if?

Written by stewart henderson

February 16, 2014 at 10:22 am

Some thoughts on morality and its origins



I remember, quite a few Christmases ago now, a slightly acrimonious discussion breaking out about religion and morality. I simply observed – it wasn’t my family. It never is.

A born-again religious woman asked her sister – ‘where do you get your morality from if not from religion?’ She responded tartly, ‘From my mum’. This response pleased one of those present, at least! But as to the implicit claim that we get our morality from religion, my silent response was ‘how does that happen?’

Religion, at least in its monotheistic versions, implies a supernatural being, from whom all morality flows. But if you ask believers whether their cherished supernatural entity talks to them and advises them regularly about the moral decisions they face in their daily lives, you would get, well, a variety of responses, from ‘yes, he does actually’, to something like ‘you miss the point completely’. The second response might lead on to – well, theology. We were given free will, the deity’s ways are mysterious but Good, he communicates with us indirectly, you need to read the signs etc etc. But you’ll be relieved I hope to hear that this won’t be an essay on religion, which you should realise by now I find interminably boring when it tries to connect itself with morality – which is most of the time.

I’m more interested here in trying, inter alia, to define human morality, to determine whether it’s objective, or universal, and if those two terms are synonymous. And as I generally do, I’ll start with a rough and ready, semi-ignorant or uninformed definition, and then try to smarten it up – possibly overturning the original definition in the process.

So, roughly, I consider human morality to be an emergent property of our socially wired brains, something which is, therefore, evolving. I don’t consider it to be objective, because that suggests something outside ourselves, like objective reality. We can talk about it being ‘universal’, as in ‘universal human rights’, which may be agreed upon by consensus, but that’s a convenient fiction, as there’s no true consensus, as, for example, the Cairo Declaration (on human rights in Islam) reveals. Not that we shouldn’t strive for consensus, based on our current understanding of human interests and human thriving. I’m a strong believer in human rights. I suppose what I’m saying here is that my ‘universality’, far from being a metaphysical construction, is a pragmatic term about what we can generally agree on as being what we need in terms of basic liberties, and limitations to those liberties, in order to best thrive, as a thoroughly social species (deeply connected with other species).

So with this rough and ready definition, I want to look at some controversial contributions to the debate, and to add my reflections on them. I read The Moral Landscape, by Sam Harris, a while back, and found it generally agreeable, and was surprised at the apparent backlash against it, though I didn’t try to follow the controversy. However, when philosophers like Patricia Churchland and Simon Blackburn get up and respectfully disagree, finding Harris ‘naive’ and misguided and so forth, I feel it’s probably long overdue for me to get my own views clear.

The difficulty that many see with Harris’s view is encapsulated in the subtitle of his book, ‘How science can determine human values’. I recognised that this claim was asking for trouble, being ‘scientistic’ and all, but I felt sympathetic in that it seemed to me that our increasing knowledge of the world has deeply informed our values. We don’t call Australian Aboriginals or Tierra del Fuegans or Native Americans savages anymore, and we don’t describe women as infantile or prone to hysteria, or homosexuals as insane or unnatural, or children as spoilt by the sparing of the rod, because our knowledge of the human species has greatly advanced, to the point where we feel embarrassed by quite recent history in terms of its ethics. But there’s a big difference between science informing human values, and enriching them, and science being the determinant of human values. Or is there?

What Harris is saying is, forget consensus, forget agreements, morality is about facts, arrived at by reason. He brings this up early on in The Moral Landscape:

… truth has nothing, in principle, to do with consensus: one person can be right, and everyone else can be wrong. Consensus is a guide to discovering what is going on in the world, but that is all that it is. Its presence or absence in no way constrains what may or may not be true.

Clearly one of Harris’s targets, in taking such an uncompromising stance on morality being about truth or facts rather than values, is moral relativism, which he regularly attacks. Yet the most cogent critics of his views aren’t moral relativists; they’re people, like Blackburn, who question whether the moral realm can ever be seen as a branch of science, however broadly defined (and Harris defines it very broadly for his purposes). One of the points of dispute – but there are many others – is the claim that you can’t derive values from facts. For example, no amount of information about genetic variation within human groups can actually determine what you ought to do in terms of discrimination based on perceived racial differences. Such information can and should inform decisions, but it can’t determine them, because it consists of facts, while values – what you should do with those facts – are categorically different.

It seems to me that Harris often chooses clear-cut issues to highlight morality-as-fact, such as that a secure, healthful, well-educated life is better than one in which you get beaten up on a daily basis. Presumably he imagines that all the gradations in between can be measured precisely as to their truth-value in contributing to well-being. But surely it’s in these difficult areas that questions of value seem to be most ‘subjective’. Can we make an objective moral claim, say, about vegetarianism, true for all people everywhere? What about veganism? I very much doubt it. Yet we also need to look skeptically at those values he sees as clear-cut. Take this example from The Moral Landscape:

In his wonderful book The Blank Slate, Steven Pinker includes a quotation from the anthropologist Donald Symons that captures the problem of multiculturalism very well:

If only one person in the world held down a terrified, struggling, screaming little girl, cut off her genitals with a septic blade, and sewed her back up, leaving only a tiny hole for urine and menstrual flow, the only question would be how severely that person should be punished, and whether the death penalty would be a sufficiently severe sanction. But when millions of people do this, instead of the enormity being magnified millions-fold, suddenly it becomes “culture”, and thereby magically becomes less, rather than more, horrible, and is even defended by some Western “moral thinkers”, including feminists.

Now, as a card-carrying humanist, and someone generally quite comfortable with the values that, over time, have emerged in my part of the western world, namely Australia, I’m implacably opposed to the practice described here by Symons. But even so, I see a number of problems with this description. And ‘description’ is an important term to think about here, because the way we describe things is an essential indicator of our understanding of the world. The description here is of a ‘procedure’, and it is brief and clinical, leaving aside the depiction of the ‘terrified struggling screaming little girl’. It isn’t a description likely to have much resonance for those who subject their daughters and nieces to this practice. After all, this is a traditional cultural practice, however horrific. It is still practiced regularly in many African countries, and in proximate countries such as Yemen. Clearly the practice aligns with rigid attitudes about the role and place of women in those cultures, attitudes that go back a long way – the first reference to female circumcision, on an Egyptian sarcophagus, dates back almost 4000 years, but it’s likely that it goes back a lot further than that. As Wikipedia puts it, ‘Practitioners see the circumcision rituals as joyful occasions that reinforce community values and ethnic boundaries, and the procedure as an essential element in raising a girl.’

Now, Symons (and presumably Pinker, and Harris) take the view that this is clearly a criminal practice, and that culture should not be used as an excuse. It’s a view backed up by most of the nations in which it occurs, which have instituted laws against it, and in 2012 the UN General Assembly unanimously voted to take all necessary steps to end it, but these national and international good intentions face a long, uphill battle. However, if you look at some of the first descriptions of this practice, by outsiders such as Strabo or Philo of Alexandria, both writing in the time of Christ, you won’t find any censoriousness, nor would you expect to. It was well accepted in the Graeco-Roman world that customs varied widely, and that many foreign customs were weird, wild and wonderful. It’s likely that observers from the dominant culture felt morally superior, as is always the case, but there was no attempt to suppress other cultural practices – any more than there was only 200 years ago, in Australia, with respect to the native inhabitants. The ‘mother country’ sent out clear and regular messages at the time about treating the natives with respect, and non-interference with their cultural practices (though it would no doubt have considered them barbaric and savage as a matter of course). It’s really only in recent times that, as a result of our growing confidence in a universal approach to morality or ‘well-being’, we (the dominant culture) have spoken out against what we now unabashedly call female genital mutilation, as well as other practices such as purdah and witch-hunting.

From all this, you might guess that I’m ambivalent about Harris’s confident approach to moral value. Well, yes and no, he said ambivalently. I can’t tell you how mightily glad I am that I live in a part of the world in which purdah and infibulation aren’t prevalent. However, I can’t step outside of my space and time, and I don’t know what it would be like to live in a world where these practices were standard. And living in such a world doesn’t mean being transported to it ‘suddenly’, it means being steeped in its values. After all, my own Anglo-Australian culture was one that, less than 200 years ago, transported homeless boys, in danger of ‘going to the bad’, to Australia, where they often ended up being worked to death on chain gangs, and this was considered perfectly normal. I would have considered it perfectly normal, for I’m not so arrogant as to imagine I could transcend the moral values of my culture as it was in the 1830s.

So, to return to the passage from The Moral Landscape quoted above. It isn’t a factual passage, it’s a description, with interpretive and speculative features. It describes, first, the actions of ‘one person’, engaged in what seems to us an insane surgical procedure; then we’re asked to multiply this act by millions, and ‘suddenly’ consider it culture. But this strikes me as a deliberately manipulative putting of the cart before the horse. The real motive seems to be to ask us to dismiss culture altogether. After all, any human product that can be called into being ‘suddenly’, and which ‘magically’ blights our moral understanding of the world, surely cannot be taken seriously. Harris, as I recall, used similar arguments against religion, perhaps in The End of Faith (which I haven’t read), but certainly in some of his talks on the subject. A practice or belief which we might lock someone up for ‘suddenly’ becomes acceptable when engaged in by millions and called ‘religion’.

This strikes me as a glib and naive argument, which could only appeal to historically uninformed (or indifferent) ‘rationalists’. Cultural and religious beliefs and practices, weird, wild, wonderful and occasionally horrifying though they might be, are far too widespread, and too deeply woven into the identity of individuals and social groups, to be set aside in this way.

This is a very, very complex issue, one that, dare I say, middle-class intellectuals like Harris and Pinker tend to skate over, even with a degree of contempt. For myself, I deal with these cultural issues with a mixture of fear – ‘don’t provoke the culturally wounded, they’ll just get angry and dangerous’ – and concern – ‘if you take away these people’s cultural/religious identity, how will they cope?’. Perhaps I’m being arrogant about the power of western secular values, but it seems to me that much of the world’s turmoil comes from resentment at old cultural and religious certainties being undermined.

So I believe in cultural sensitivity, for strategic purposes but also because we are all culturally embedded, no matter how scientifically enlightened we claim to be. However, I don’t think all cultures are, or all culture is, equally valuable or equally healthy. How I measure that, though, is a big question since I can’t step outside of my own culture. Perhaps therein lies the difficulty about getting all ‘scientific’ about morality. Science itself is hardly culture-free – a dangerous point to make in some circles.

So I don’t think I’ve gotten much further as to where morality comes from. To say that it comes from culture requires a thorough definition and understanding of that concept; otherwise we’re just deferring any real explanation, but clearly that is the way to go. But I prefer to look at this connection with culture, and with other more fundamental aspects of our social nature, from a humanist perspective. Western secular humanism tends to wear its culture lightly, and to value skepticism, reflection and analysis as – possibly cultural – tools for dismantling or at least loosening the overly heavy and oppressive armour that cultural beliefs and practices can become.

Written by stewart henderson

January 4, 2014 at 12:09 am

on transcendental constructions: a critique of Scott Atran




Some years ago, when watching some of the talks and debates in the first ‘Beyond Belief’ conference at the Salk Institute, I noted some tension between Sam Harris and his critique of religion generally and Islam in particular, and Scott Atran, an anthropologist, who appeared to be quite contemptuous of Harris’s views. Beyond noting the tension, I didn’t pay too much attention to it at the time, but I’ve decided now to look at this issue more closely because I’ve just read Ayaan Hirsi Ali’s powerful book Infidel, which gives an insider’s informed and critical view of Islam, particularly from a woman’s perspective, and I’ve also listened to Chris Mooney’s Point of Inquiry interview with Atran back in April, shortly after the Boston marathon bombing.

The interview, called ‘What makes a terrorist?’ was mainly about the psychology of the more recent batch of terrorists, but in the latter half, Atran responded to a question about the role of Islam specifically in recent terrorist behaviour. It’s this response I want to examine, not so much in the light of Sam Harris’s contrasting views, but in comparison to those of Hirsi Ali.

In bringing up the role of Islam in terrorism, Chris Mooney cites Sam Harris as pointing out that ‘there’s something about Islam today that is more violent’. Atran’s immediate response is that ‘this is such a complex and confused issue’, then he says that ‘religions are fairly neutral vessels’. This idea that religions, especially those that survive over time, have a degree of neutrality to them, has some truth, and in fact it served as the basis for my critique of Melvyn Bragg’s absurd claims that Christianity and the KJV Bible were largely responsible for feminism, democracy and the anti-slavery movement. But there is a limit to this ‘neutrality’. Religions are clearly not so ‘neutral’, morally or culturally, that they’re interchangeable with each other. Fundamentalist, or ultra-orthodox, or ultra-conservative Judaism is not the same as its Islamic or Christian counterparts. In fact, far from it. And yet these three religions ostensibly share the same deity.

The interaction between religion and culture is almost impenetrably complex. I wrote about this years ago in an essay about traditional Australian Aboriginal religion/culture, in which it’s reasonable to say that religion is culture and culture is religion. In such a setting, apostasy would be meaningless or impossible – essentially a denial of one’s own identity. Having said that, if your religion, via one of its principal texts, tells you that apostasy is punishable by death, you’ve already got a yawning separation between religion and cultural identity – the very reason for the excessive threat of punishment is to desperately try to plug that gap. It’s like the desperate cry of a father – ‘you’ll never amount to anything without me!’ – as the son walks out the door for the last time.

These major religions – Judaism, Islam and Christianity – are embedded in texts that are embedded in culture. Different, varied texts interacting complexly – reinforcing, challenging, altering the culture whence they sprang. Differently. Judaism’s major text, always arguably, is the Torah. Christianity’s is the New Testament, or is it the gospels? Islamic scholars – but also those believers who rarely ever read the sacred texts – will argue about which texts are most important and why. Nevertheless, Judaism, Christianity and Islam all have a different feel to them from each other, even given the enormous variation within each religion. Judaism is profoundly insular, with its chosen people uniquely flayed by their demanding, unforgiving god. Christianity is profoundly other-worldly with its obsession with the saviour, the saved, the end of days, the kingdom to come, the soul struggling for release, not to mention sin sin sin. Islam, a harsh, desert religion, somehow even more than the other two, is about denial, control, submission, and jihad in all its complex and contradictory manifestations and interpretations. The status of women in each religion, in a general sense, is different. Christianity gives women the most ‘wriggle-room’ from the start, but its interaction with the different cultures captured by the religion can sometimes open up that space, or close it down. The New Testament presents a patriarchal culture of course, but in the gospels women aren’t given too bad a rap. Paul of Tarsus notoriously displays some misogyny elsewhere in the NT, but it isn’t particularly specific and no detailed restrictions on women’s freedom are presented. More importantly, the dynamism of western culture has blown away many attempts to maintain the restrictions on women’s freedom dictated by Christian dogma – pace the Catholic Church.
In any case, Christianity has no equivalent to Sharia Law, with its deity-given restrictions and overall fearfulness of the freedom and power of women. And neither Christianity nor Islam has the obsession with ritual and with interpretation of the deity’s very peculiar requirements that orthodox Judaism has.

To return, though, to Atran. He argues that the big religions survive and thrive precisely because of their lack of fixed propositions – which is why, he says, we need sermons to continually update and modernise the interpretations of texts, parables, suras and the like. I’m not sure if the khutbas of Moslem imams serve the same purpose as priests’ sermons, but I generally agree with Atran here. The point, of course, is that though there is much leeway for interpretation, there are still boundaries, and the boundaries are different for Islam compared to Christianity, etc.

What follows is my analysis of what Atran has to say about what are, in fact, very complex and contentious matters relating to religion and social existence. Whole books could be, and of course are, devoted to this, so I’ll try not to get too bogged down. I’m using my own transcript of Atran’s interview with Mooney, slightly edited. Occasionally I can’t quite make out what Atran is saying, as he sometimes talks softly and rapidly, but I’ll do my best.

So, after his slightly over-simplified claim that these big religions are ‘neutral vessels’, Atran goes on with his definition. These religions are:

… moral frameworks that provide a transcendental moral foundation for large groups coalescing – for how else do you get genetic relatives to form large co-operative groups? They don’t have to be necessarily religious today, but it involves transcendental ideas. Take human rights, for example, that’s a crazy idea. Two hundred and fifty years ago a bunch of intellectuals in Europe decided that providence or nature made all human beings equal, endowed by their creator with rights to liberty and happiness, when the history of 200,000 years of human life had been mostly cannibalism, infanticide, murder, the suppression of minorities and women, and so [through the wars?] and social engineering, they took this crackpot idea and made it real.

I have a few not so minor quibbles to make here. Presumably Atran is using the term ‘transcendental’ in the way that I would use the term ‘over-arching’ – a much more neutral, and if you like, secular term. The trouble is – and he uses this term often throughout the interview – Atran uses ‘transcendental’ with deliberate rhetorical intent, taking advantage of its massive semantic load to undercut various secular concepts, in this case the ‘crackpot’ concept of human rights.

This isn’t to say that Atran objects to human rights. My guess is that he regards it as a somewhat arbitrary and unlikely concept, invented by a bunch of European intellectuals in the Enlightenment era, that just happened to catch on, and a good thing too. That’s not how I see it. It’s just much much more complex than that. So much so that I hesitate to even begin to explore it here. The germ of the concept goes back at least as far as Aristotle, and it involves the increasingly systematic study of human history, and human psychology. It involves the science of evolution, and it involves pragmatic global developments in commerce and diplomacy. Eighteenth century Enlightenment ideas had a catalytic effect, as did many developments of the scientific enlightenment of the previous century, as did the growth of democratic ideas and the concept of systematic universal education and health-care in the nineteenth century, in the west.

My point is that, though I have no problems with calling human rights a convenient fiction – nobody ‘really’ has rights as such – it’s based on a this-worldly (i.e. non-transcendental) understanding of how both individuals and societies flourish and thrive, in terms of the contract or compromise between them.

Atran goes on:

But, in general, societies that have unfalsifiable and unverifiable transcendental constructions win out over those that don’t –  I mean, Darwin talked about it as moral virtue, and said that this is responsible for the kind of patriotism, sympathy and loyalty that makes certain tribes win out over other tribes in […] competition for dominance and survival, and again, without these transcendental ideas people can’t really be blinded to [exit strategies], I mean, societies that are based on social contracts, no matter how good they are, the idea that there’s always a better deal down the line makes them liable to collapse, while these societies are much less prone to that. And there are all sorts of other things associated with these sorts of unverifiable propositions.

Presumably these ‘unfalsifiable and unverifiable transcendental constructions’ are religions, and I’ve no great objection to that characterisation, but I’m not so convinced about the positive value for ‘dominance and survival’ of these constructions. One could argue that my kind of scepticism can only flourish in a secure environment such as we have in the west, where such ‘undermining’ values as anti-nationalism and atheism can’t threaten the social cohesion of our collective prosperity and sense of superiority to non-western notions. There are just no ‘better deals down the line’, except maybe more health, wealth and happiness, commitment to which requires the very opposite of an ‘exit strategy’. In other words, western ‘social contract’ societies, in which religious belief is rapidly diminishing (outside the US), are showing no sign of collapsing, because there is no meaningful exit strategy, except a delusional one. There is no desire or motivation to exit. We’re largely facing our demons and rejecting overly ‘idealistic’ solutions.

Perhaps my meaning will be clearer when we look at more of Atran’s remarks:

So now, the propositions, these things themselves can be interpreted, however, depending on the political and social climate of the age. Islam has been interpreted in ways that were extremely progressive at one time, and at least parts of it are extremely retrogressive, especially as concerns science for example, the position of women in the world, especially parts of it in many countries it’s extremely retrograde. But, Islam itself, I mean does it have some essence that encourages this kind of crazy violence? No, not at all – that truly is absurd, and just false.

Atran’s becoming a bit incoherent here, and maybe he expresses himself better elsewhere, but his basic argument is that there’s no ‘essence’ to Islam which renders it more violent than other religions, or transcendental constructions (eg communism or fascism) for that matter. He overplays his hand, I think, when he claims that this is ‘absurd’ and obviously false. We could call this ‘the argument from petulance’. Islam does have some essential differences, I think, which make it more able to act against women and against scientific ideas, though I agree that this is a matter of degree, and that it’s very complex. For example, the growth of Catholicism in Africa has combined with certain aspects of tribal culture and patriarchy to make African Catholic spokesmen very outspoken against homosexuality – and a recent local television program had a Moslem leader speaking up in favour of gay marriage. So, yes, there is nothing fixed in stone about Islam or Christianity with respect to human values.

The thing is that, for writers like Ayaan Hirsi Ali, and I suspect Sam Harris too, the question of ‘essentialism’ is largely academic, for right here and right now people are being targeted by Moslems (under the pressure of cultural connections or disconnections), because they are apostates, or critics, or women trying to get an education, or women dressing too ‘immodestly’, and this is causing great tension, even to the point of death and destruction here and there. In fact, Hirsi Ali, in calling for an enlightenment in the Moslem world, is backing a non-essentialist view. It’s the culture that has to change, but of course religion, with its transcendentalist, eternalist underpinnings, acts as a strong brake against cultural transformation. To engage in the battle for moderation is to battle for this-worldly, evidence-based thinking on human flourishing, against transcendentalist ideas of all kinds.

Atran, I think, relies too heavily on his notion of ‘transcendental constructions’, which he uses too widely and sweepingly, even with a degree of smugness. Let me provide one more quote from his interview, with some final comments.

But again, I don’t see anything about Islam itself… you need some kind of transcendental ideal to get people to sacrifice for genetic strangers, for these large groups. Religion is the best thing that human history has come up with, but there are other competing transcendental notions of which democratic liberalism, human rights, communism, fascism, are others, and right now the democratic-liberal-human rights thing is predominant in a large part of the world and it’s a salvation [……..] and people don’t want that or feel left in the driftwood of globalisation, they are looking for something else to give them equal power and significance.

Methinks Atran might’ve been spending too much time in the study of religious/transcendental ideas – he’s seeing everything through that perspective. I myself have written about democracy, in its various manifestations, from a sceptical perspective many times, and I’ve been critical of the over-use of the concept of rights, and so forth. It’s true enough that people can take these concepts, along with fascism or communism, to a transcendental level, making of them an unquestionable given for ‘right living’ or ‘a decent society’, but they can also be taken pragmatically and realistically, reasonably, as the most serviceable approaches to a well-functioning social order. Social evolution is moving quickly, and we can make sacrifices for genetic strangers, based on our growing understanding, as humans, of our common genetic inheritance. We’re not so much genetic strangers, perhaps, as we once thought ourselves to be. Indeed, it’s this growing understanding, a product of science, that is expanding our circle of connection beyond even the human. We need to promote this understanding as much as we can, in the teeth of transcendentalist, eternalist, other-worldly ideas about submission to deities, heavenly rewards and spiritual superiority.

the rise of unbelief in the USA



I’m always interested in statistics about religious belief and its decline in most western countries, and I like to keep up with the latest findings, so I want to post fairly regularly about this. This time I want to focus on that toughest nut to crack, the USA.

This is, of course, another area where ideology influences or even creates facts – for example, the fanatical Christian theist William Lane Craig has trotted out claims that atheists represent 5% or sometimes 3% of the US population, and thus can be dismissed with impunity. He qualifies this by saying that the rising population of the non-religiously affiliated are not necessarily atheists, etc etc. It’s a tediously trivial point. Many individuals who clearly don’t believe in supernatural entities are uncomfortable with the term ‘atheist’ – Sam Harris among them – and I can identify with that discomfort. Some prefer to identify as secularists, sceptics, humanists or whatever, and their identification with atheism can vary with the time of day – as mine sometimes does. This has little bearing on the fact that this non-believing, uncomfortable-with-labels sector of US society is increasing in number and in proportion of the total. As to Craig, as I’ve written before, he will always be in as much denial of the truth as the flat-earthers of a previous century, and I guarantee that he will be saying on his death-bed, after as long a lifetime of fanaticism as I could possibly wish for him, that the number of atheists in the USA is down to 2% or possibly less, with most of them being drooling mental defectives. He is truly the Don Quixote of evangelical Christianity.

So let me start with this Point of Inquiry podcast from 7 years ago. It described a 2001 study, the American Religious Identification Survey (ARIS), by a group based at the City University of New York. According to Tom Flynn, reporting on the study, the number of ‘Nones’ (people who answered the question ‘what is your religious affiliation?’ with ‘none’) increased from just under 9% in the early nineties (in fact the ARIS website puts Nones at 8.2% in 1990) to 14.1% in 2001. The podcast explores the implications of this shift as well as the more detailed findings of the study, which I’ll explore shortly, but first let me provide an update to this finding, because another ARIS study was carried out in 2008, which put the proportion of Nones at 15%, a considerable slowdown in growth. My own reflection is that this slowdown may be partly accounted for by the incumbency of the Republican Party during the period. I suspect that the next survey, presumably around 2016-2017, will show an increase in growth during the Obama presidency.

Much time is spent in the podcast on why this ‘rise of the Nones’ is occurring – and I should say that other independent surveys have also reported this trend. One reason for non-affiliation with traditional religious denominations (while not necessarily disavowing ‘spirituality’) is the loosening of old authoritarian ties, begun back in the sixties and seventies and still continuing, backed by an education system that encourages the questioning and challenging of establishment thinking. In keeping with this, it’s the more liberal protestant religions that, as in Australia, have been the biggest losers over the last fifteen years or so. In the USA, though, there has been a burgeoning pentecostal movement, essentially conservative in nature, which has gained much ground at the expense of the traditional churches. It appears to constitute a backlash against the many societal changes of recent decades, adding to the well-recognised polarisation of US society.

The University of Akron in Ohio also does regular surveys on religious trends in the USA. They did one in the run-up to the 2004 election, which recorded 16% of the nationwide sample as being not religiously affiliated – essentially Nones. Different questions were asked of course – and I suppose this raises the issue of how you might write a question or a series of questions which will provide you with the biggest percentage of Nones possible, without actually resorting to threats and intimidation. That was a joke. And yet… In any case, the Akron study asked further questions of this 16% group and found that more than two thirds of them were ‘non-spiritual’ – that’s to say, definitely atheist, agnostic and/or humanist. In other words, more than 10% of the US adult population, if this study is to be trusted, are explicitly non-religious, and in fact that mark was passed a decade ago. Ten percent is a bit of a magical number for minorities in the US, as Tom Flynn explains.

So what about the most recent data? There’s not much that’s really recent that I can find. The Pew Forum on Religion and Public Life did a survey in 2007 that found 16.1% were Nones, but they broke that percentage down differently, and it gave the explicitly non-religious less of a share than the Akron survey. As I’ve mentioned, the 2008 ARIS survey had the Nones at 15%. However, a biennial poll called the General Social Survey, probably the one discussed at Evolutionblog recently, and treated in more detail here, had the Nones up at 20% – up from 8% in 1990. There are obvious doubts about how exactly such percentages are arrived at, but there’s surely no doubt about the direction of the trend. This survey, as with others, shows that liberals are more likely to be Nones than conservatives (by a long way), the youngest adults are more likely than the oldest (also by a long way) and men are more likely than women (by a smaller but still significant margin). May this overall trend continue, and may I long continue to observe and report on it.

Written by stewart henderson

July 14, 2013 at 10:22 am