an autodidact meets a dilettante…

‘Rise above yourself and grasp the world’ Archimedes – attribution

a bonobo world 26: boys and girls at work and play

Emmanuelle Charpentier and Jennifer Doudna, brilliant women with great dress sense

In her introduction to The Second Sex, Simone de Beauvoir wrote this: 

… the truth is that anyone can clearly see that humanity is split into two categories of individuals with manifestly different clothes, faces, bodies, smiles, movements, interests and occupations; these differences are perhaps superficial; perhaps they are destined to disappear. What is certain is that for the moment they exist in a strikingly obvious way.

A whole book could easily be written – some already have – to expand on this apparently mundane observation. Today in the west, or the developed world, or Anglo-American or Euro-American society (I never know quite what to call it), there are no set rules, of course, about how people should dress, or behave, or work or play, gender-wise, but there are conventions and social pressures, and I’ve noted encouraging developments, as well as their opposite.

A close female friend expressed a certain despair/disdain the other day in telling me that Dr Jill Biden, aged 69, wore stilettos for her husband’s inauguration as US President. I share that friend’s conviction that stilettos should only be used as murder weapons. In any case men only wear stilettos when in drag, which is all too rare. 

On clothing and accessories, while today’s variety is inspiring and liberating for both sexes, one still sees frustrating gender-based tendencies everywhere. Frills and furbelows have long been all the go for female formal attire, while tuxes or frock-coats are de rigueur for males, compleat with ties, bowed or straight. These traditions tend to emphasise gender differences you’d never notice in bonobos, though there is a welcome playfulness of gender-swapping attire among the elites, seldom replicated in your local bar or restaurant. 

What has constantly surprised me, as a person who spent his youth in the sixties and seventies, when déclassé jeans and t-shirts, in colourful variety, were common and pleasantly informal, is that those decades didn’t establish a trend of ambisexual dress – just as I’ve been surprised that traditional marriage didn’t get thrown out as seemed to be on the cards in those days. Marriage today appears to represent much of human ambiguity – a commitment to monogamous ideals even while recognising their limitations, even their absurdity. Conservatives argue that loyalty is a much undervalued value, but it’s always been possible to have more than one loyal friend, with benefits. Bonobos manage to have a bunch of them. Bonobos aren’t being rad, they’re just being bonobos. Which raises the question: what is it to be human?

David Deutsch, in The beginning of infinity, celebrates and encourages our infinite possibilities, to find solutions, to expand our outlooks, to achieve outrageously amazing things. He writes of the value of optimism over pessimism, and progress over stasis. I’m largely in agreement, but with some reservations. He has nothing to say about community, for example. Community, it seems to me, has become ever more important as change has become more rapid. As Deutsch and others have pointed out, during the many thousands of years when humans lived the hunter-gatherer life, with no doubt many variations, life simply didn’t change from generation to generation. And as long as that life was sustainable, there was little need for new developments, new hunting or grinding implements, new forms of shelter or clothing. So, nobody was out of date or old-fashioned, there were no old fuddy-duddies you wouldn’t be seen dead with. In fact, quite the opposite – the elders would have been more expert at the latest technology, developed in the previous aeon, than the youngsters, who would marvel at how those old guys’ boomerangs always came back (okay, they were never actually intended to). Given this relatively static society, it’s hardly surprising that elders were more respected, for their skills, experience and store of communal lore, than today’s nursing home denizens. And, as always, I’m aware of the multifarious nature of modern human societies, static and otherwise, to which I have little access, beyond book-larnin. Most of these societies or cultures, though, are today forced to interact with others, creating identity confusions and divided loyalties by the brainload.

Anyway, sticking with the White Anglo-Saxon ex-Protestant culture I’m familiar with, I’m a bit shocked that, despite two or more waves of feminism in the last century or so, women are still earning less than men and paying more for what I would deem unnecessary accoutrements, including hairstyles, bling, fancy tattoos, make-up and the aforementioned frills and furbelows. I recently bought a ‘men’s’ stick deodorant, which seemed to me nothing more than an anti-perspirant, and which was identical to that of my female partner, only bigger, and cheaper! These are ‘first-world issues’, of course, but they reflect, in little, an exploitation of the feminine worldwide, which seems a hard nut to crack.  

There’s of course a thing about eternal youth, in regard to women, that should be addressed. Men in their fifties don’t wear make-up, at least not the ones I know. Quite a few women I know, in their fifties, and older, also don’t wear make-up, but let’s face it, most of them do – with all the expense, as well as the time and effort, this involves. They do it, presumably, to hide the effects of gravity, though gravity always wins, as Radiohead informs us. With men, apparently, gravity lends gravitas.

I’ve often – in fact, ever since adolescence  – imagined myself as female. Mostly lesbian female, though I did have an early period of male-male attraction. So, if I did turn out female, how would I behave, appearance-wise, now that I’m in my sixties? Would I wear an op-shop jacket, t-shirt (usually with some thought-bubble printing) and chino-type trousers, as I do now? I hope so. It’s a kind of unisex outfit for academic and sciencey people, the types I’ve always aspired to be. But unfortunately, feminists have recently written of the pink/blue divide in children’s clothing that’s stronger than ever, as well as the divide in toys – fighting, racing and danger versus dancing, cuddling and beauty. This appears to be driven by manufacturers and advertisers, who, like social media moguls, seem to derive a benefit from driving their customers down wormholes of like-mindedness. Not surprisingly, social psychologists find that children benefit from being more unisex in these choices – not a matter of turning them into their opposites, but seeing dolls and trucks as others see them, and generally being more colourful. And slowly, all too slowly, we’re following this advice, and seeing more male nurses and female truck-drivers than previously. Not to mention female white supremacists sporting submachine guns – but that’s only in the US, they do things differently there. And more males working in child-care? That’s another nut to crack.


Simone de Beauvoir, Le Deuxième Sexe (1949), new translation 2009.


Written by stewart henderson

January 29, 2021 at 12:59 pm

20: bonobo and human families, early childhood and free will

ye olde nuclear family, and its enclosures

The bonobo reproduction rate is low, as is ours these days, though for different reasons. Bonobos don’t tend to go all the way, while humans have contraception even for naughty catholics. Muslim scholars seem a little confused about the issue, but are generally more accepting than their catholic counterparts. As to children, humans are rather more possessive about them than bonobos. Bonobo females are largely in charge of the kids, collectively, and paternity is unknown and undisputed. Think about how that would play out in human society, which for millennia has been largely patriarchal, patrilineal and even primogenitive. 

This doesn’t mean male bonobos are hostile to kids, as it’s generally a caring and sharing society, and besides, humouring the kids is a good way of winning favours from their mothers and others. Think of how that would be as a kid – you wouldn’t just be able to run to dad when mum’s mad at you, you’d have any number of adults to run to. You’d also have a range of adults to learn from, to identify with, to consider as role models, as well as to play off against each other. 

Modern, supposedly advanced human society is very different. We live in separate, securitised houses, in nuclear families – ideally mum, dad and 2⅓ kids – with a garden surrounded by a high fence, if we’re ‘lucky’. The grandparents live across town, or in another country, or a nursing home. Visitors are vetted by smartphone. Of course often it’s a single-parent situation, usually mum, and the odd long- or short-lived boyfriend. She works, so the kids spend a lot of time in day-care, meeting other kids and sharing with them one or two adults, who don’t get too close, wary of being accused of funny business. Rarely are these adults male. Still it’s pretty good, lots of toys and games and things to make and do, all in primary colours, but it’s not every day because it’s too expensive, you (the kid) sometimes get shipped around to aunties or friends or assorted baby-sitters, or you get switched to a new centre, with a whole bunch of strangers, or a kid you really like just disappears. But mostly you’re at home with your stupid brother, until school days arrive and you have to wear a uniform, and mum fusses over you and makes you feel nervous and watchful about whether you look different from the other kids, in a good or bad way. And you learn stuff and you like or hate the teacher and you start competing with the other kids and start thinking about how smart or dumb you are. 

Modern human life is pretty regimented. At a certain tender age you go to school where you learn first of all the basics of numeracy and literacy as the first steps toward being civilised. You also learn about rules and regulations, time management and the difference between work and play. Thrown into the school pool of humanity, you’re driven to contemplate and come to terms with variety: fat and skinny, pretty and ugly, noisy and quiet, smart and dumb, friend and enemy and all in between. You learn to make judgments, who to trust, who to avoid, and what to pay attention to. The prefrontal cortex, that amazing human asset, is continuing on its great connective journey, as you negotiate yourself between the formal and the free, between regimentation and independence. 

Yet all the research tells us that most of those judgments you make at school, and which you vaguely remember having made, are actually the product of that growth period before the laying down of memories, distorted or otherwise. And that includes your ability to make effective judgments. 

In the first few years of life, we form more than a million new neural connections every second – so many, in fact, that this surge of connections is followed by a period of pruning for order and efficiency. But this early period of development requires stimulation, which comes in infinite varieties of ways, including, of course, the bonobo way (and I don’t mean tree-climbing and chomping on insects), the chimp way (watching adult males battling it out), the Tiwi Islander way or the Netherlands royal family way or whatever. And much of this guided stimulation forms our behaviour for the rest of our lives. And the lack of it can reduce our capacities for a lifetime, in spite of subsequent kindness and care, as the notorious case of the Romanian orphans kept in horrendous states of neglect under the Ceauşescu regime has shown, though interestingly, some 20% of those adopted orphans have grown up showing little or no damage. Stimulation can come from within as well as without, and neglect has many variables. 

It stands to reason that we as individuals have little or no control over our development in this crucial period. Which brings me to the issue of free will. Philosophers have traditionally argued for free will on the ‘could have done otherwise’ basis. I could have drunk tea rather than coffee with brekky this morning (though I invariably drink coffee). I could’ve chosen x from the restaurant menu instead of y. So often these trivial examples are given, when it’s screamingly obvious that you don’t get to choose your parents, your genetic inheritance, your early childhood environment, the country or period you were born into, or even the species you were born as (I could’ve snuffed out your brief candle by treading on you on this morning’s walk). Given these constraints on your freedom, restaurant choices surely pale into insignificance. 

But let’s stick with humanity. I won’t go into the neurological underpinnings of the argument against free will (as if I could), but if we treat no free will as a given, then the consequences for humanity, vis-à-vis our handling of crime and punishment, are stark, as the neuroscientist and primatologist Robert Sapolsky points out in the penultimate chapter of his book Behave, entitled ‘Biology, the criminal justice system, and (oh, why not?) free will’. This is a vital issue for me, in terms of a more caring and sharing bonoboesque society, so I’ll reserve it for another essay, or two, or more.  


InBrief: The Science of Early Childhood Development

Robert Sapolsky, Behave: the biology of humans at our best and worst. 2017


Written by stewart henderson

January 6, 2021 at 12:43 pm

a bonobo world? 6 – cultural dynamism, females, families and inhibitions

the nuclear family – actually modern, not traditional

Most broadly, culture is defined as the ideas, customs, and social behaviour of a particular people or society. No culture is static, though it may seem so when looked at from the viewpoint of a more dynamic culture. But why are some cultures more dynamic than others?

The Bronowski comment on taking ‘the first step on the ascent of rational knowledge’ echoes in my head when I reflect on this question. But ‘rational knowledge’ strikes a false note, as there isn’t any knowledge that is irrational. And the most essential thing that any animal, human or otherwise, must know is what to do to survive. Every species that has survived for any length of time has obtained that knowledge, and in a dynamic culture, one faced with external threats and challenges, both cultural and environmental, that knowledge must continue to grow. That’s the key to our ever-changing culture – social evolution rather than the kind of physical adaptations described in The Origin of Species. That is why, for example, we have belatedly come to realise that women deserve as much opportunity as men: to be educated, to be productive, and to be leaders in any field they choose to enter. It is why Bronowski’s ‘Ascent of Man’ series, with its more or less exclusively male examples of strength and aptitude, seems cringeworthy after only a few decades. 

The argument of course goes that man, like the Latin, homo, is simply a generic term for the species, and we (i.e. women) should just get over it. The origins of the words woman and female are complex, but surely it’s clear that they are add-ons to the words man and male, afterthoughts like the woman in the Bible created from a man’s rib. In French, the word femme appears to be quite different from homme, but femme means wife as well as woman, the implication being that one’s wife is also one’s woman. No such implication exists for the word homme. The cultural implications of our everyday terminology continue to be impactful, and awareness of these implications is more important, I feel, than artificial changing of the language, helpful though this may be. 

Of course, no environment is static either, and animals need to be quick to adapt to new environmental threats. The paleontological record is full of species that failed in this regard. Arguably, we may do so too, if the threat is too overwhelming, but surely nothing is currently in the offing, in spite of some doomsayers. The global warming we’re currently experiencing, for example, is far less threatening to our superabundant species than was the Toba eruption of 70,000 years ago, during the last ice age (though its effects, too, are disputed). Global warming is an existential threat, however, for many other species, already pushed to the brink by deforestation, overfishing and other human activities. Yet many will say that our ingenious species – by which they generally mean the dominant culture within our species – is even better at finding solutions than creating problems. And there are many good news stories, even in relation to those other species that we keep threatening. This is indeed the ray of hope, for our species and for others. It’s my view that, if we succeed in the future, it will be because we have gradually become more compassionate, more inclusive, more frugal and more collaborative, without losing the adventurous, questing, scientific spirit that has made us so successful. 

In describing this possible future I’ll strive to be realistic and evidence-based, and that’s where the example of bonobos comes in, for this description of a future humanity fits loosely the bonobo society – without quite the scientific spirit of course. I will not be idealising bonobo society, but there are increasing problems in our culture (and note that I’m always talking about those of ‘western’ or westernised nations – western Europe, the USA, Australia and Canada – but also Japan, Korea and Taiwan) – problems relating to family, work, resources and government – that might benefit from our understanding of cultures, and species, we feel we have transcended, and the bonobo way of life is a prime example of this. 

The modern human family is more or less nuclear, indeed like the nucleus inside a cell, though we call it a house, or a home. The walls of the house are like a semi-permeable membrane, with doors and windows through which nutrients and chemicals can be funneled, and of course information about the outside world arrives via books, magazines and, increasingly, electronic devices. Of course, some of these families are more functional and happy than others, and a child’s early fate is a matter of luck in this respect. Extended families – grandparents and cousins who live within walking distance – have become rarer, as have long-term neighbours and lifelong friends, due to the increasing mobility of modern life. In my own case, growing up under a seriously dysfunctional parental situation, and separated by migration from the extended family 15,000 kilometres away, I was grateful for a deeper connection to the outside world resulting from books, of which our home always had an abundance. One book which made a deep impression on me in my early teens was Children of the Dream, by Bruno Bettelheim. Of course, I came to the book with a particular hope that there were better ways of raising children than what I’d experienced, so I was bound to see it in a positive light. Regardless of the reality of the kibbutz experiment, what I found in the book’s descriptions opened up for me other options, including richer, more varied and positive relations with elders as well as peers, and a wider sense of belonging than I was experiencing. Trust, acceptance, and a nurturing of challenge and growth, these were the values that meant most to me, and which I found missing both at home and in the school environment I’d been thrown into. Yet it’s also true, or quite likely, that certain events and experiences in my early life, largely hidden from myself, have made it difficult for me to trust and to connect in positive ways. 

The Dunedin Multidisciplinary Health and Development Study, a longitudinal study that has been carried out over 50 years now, provides solid evidence of the overwhelming influence of early childhood on subsequent personal development, noting that personality types are established early on in life. My own self-diagnosed type – and the study describes five – is ‘reserved’, bordering on ‘inhibited’. The latter can be a serious problem, which the Japanese describe as hikikomori, roughly translated as ‘acute social withdrawal’, though the problem is hardly confined to Japanese youth. I think, however, I’ve been saved from this acute state by the world of books and ideas, which I love to discuss, when I can bring myself to get out there and do so. 


Bruno Bettelheim, Children of the Dream, 1969.

Dunedin Study Findings: The Importance of Identifying Personality Types at a Young Age, by Kirsteen McLay-Knopp

Written by stewart henderson

November 3, 2020 at 12:04 pm

interactional reasoning: modularity

all explained

Mercier and Sperber write a lot about modules and modularity in their book on interactional reasoning and its evolution. I’ve passed this over as I find the concepts difficult and I’m not sure if understanding reasoning as a module, if it fits that description, is essential to the thesis about interactional reasoning and its superiority to the intellectualist model. However, as an autodidact who hates admitting intellectual defeat, I want to use this blog to fully understand stuff for its own sake – and I generally find the reward is worth the pain.

Modules and modularity are introduced in chapter 4 of The enigma of reason. The idea is that there’s a kind of inferential mechanism that we share with other species – something noted, more or less, by David Hume centuries ago. A sort of learning instinct, as argued by bird expert Peter Marler, but taken further in our species, as suggested by Steven Pinker in The language instinct, and by other cognitive psychologists. 

This requires us to think more carefully about the term ‘instinct’. Marler saw it as ‘an evolved disposition to acquire a given type of knowledge’, such as songs for birds and language for humans. We’ve found that we have evolved predispositions to recognise faces, for example, and that there’s a small area in the inferior temporal lobes called the fusiform face area that plays a vital role in face recognition. 

However, reasoning is surely more conceptual than perceptual. Interestingly, though, in learning how to do things ‘the right way’, that’s to say, normative behaviour, children often rely on perceptual cues from adults. When shown the ‘right way’ to do something by a person they trust, in a teacherly sort of way (this is called ostensive demonstration), an infant will tend to do it that way all the time, even though there may be many other perfectly acceptable ways to perform that act. They then try to get others to conform to this ostensively demonstrated mode of action. This suggests, perhaps, an evolved disposition for norm identification and acquisition. 

Face recognition, norm acquisition and other even more complex activities, such as reading, are gradually being hooked up to specific areas of the brain by researchers. They’re described as being on an instinct-expertise continuum, and according to Mercier and Sperber:

[they] are what in biology might typically be called modules: they are autonomous mechanisms with a history, a function, and procedures appropriate to this function. They should be viewed as components of larger systems to which they each make a distinct contribution. Conversely, the capacities of a modular system cannot be well explained without identifying its modular components and the way they work together.

A close reading of this passage should suggest to us that reasoning is one of those larger systems informed by many other mechanisms. The mind, according to the authors, is an articulated system of modules. The neuron is a module, as is the brain. The authors suggest that this is, at the very least, the most useful working hypothesis. Cognitive modules, in particular, need not be innate, but can harness biologically evolved modules for other purposes.

I’m not sure how much that clarifies, though it has helped me, for what it’s worth. And that’s all I’ll be posting on interactional reasoning, for now.

Written by stewart henderson

February 6, 2020 at 5:29 pm

interactional reasoning: some stray thoughts


As I mentioned in my first post on this topic, bumble-bees have a fast-and-frugal way of obtaining the necessary from flowers while avoiding predators, such as spiders, which is essentially about ‘assessing’ the relative cost of a false negative (sensing there’s no spider when there is) and a false positive (sensing there’s a spider when there’s not). Clearly, the cost of a false negative is likely death, but a false positive also has a cost in wasting time and energy in the search for safe flowers. It’s better to be safe than sorry, up to a point. The bees still have a job to do, which is their raison d’être. So they’ve evolved to be wary of certain rough-and-ready signs of a spider’s presence. It’s not a fool-proof system, but it ensures that false positives are a little more over-determined than false negatives, enough to ensure overall survival, at least against one particular threat. 
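The bee’s asymmetry can be sketched as a simple expected-cost decision rule. This is purely my own illustration, not anything from Mercier and Sperber: the function name, probabilities and cost figures are all invented for the sketch, and real bees of course ‘compute’ nothing so explicit.

```python
def should_avoid_flower(cue_strength: float,
                        p_spider_given_full_cue: float = 0.05,
                        cost_false_negative: float = 100.0,  # landing on a spider: likely death
                        cost_false_positive: float = 1.0     # skipping a safe flower: lost time and energy
                        ) -> bool:
    """Avoid the flower when the expected cost of landing exceeds
    the expected cost of skipping it. All numbers are invented."""
    # cue_strength in [0, 1] scales how strongly the rough-and-ready
    # signs of a spider are present on this flower
    p_spider = min(1.0, p_spider_given_full_cue * cue_strength)
    expected_cost_of_landing = p_spider * cost_false_negative
    expected_cost_of_skipping = (1 - p_spider) * cost_false_positive
    return expected_cost_of_landing > expected_cost_of_skipping

# Because a false negative is so much costlier than a false positive,
# even a weak cue tips the rule towards avoidance:
print(should_avoid_flower(0.3))  # True  - weak cue, still avoid
print(should_avoid_flower(0.0))  # False - no cue, get on with the job
```

The point the sketch makes is just the one in the paragraph above: with costs this lopsided, a sensible rule over-produces false positives, while still letting the bee work when no cue is present at all.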

When I’m walking on the street and note that a smoker is approaching, I have an immediate impulse, more or less conscious, to give her a wide berth, and even cross the road if possible. I suffer from bronchiectasis, an airways condition, which is much exacerbated by smoke, dust and other particulates. So it’s an eminently reasonable decision, or impulse (or something between the two). I must admit, though, that this event is generally accompanied by feelings of annoyance and disgust, and thoughts such as ‘smokers are such losers’ – in spite of the fact that, in the long long ago, I was a smoker myself.

Such negative thoughts, though, are self-preservative in much the same way as my avoidance measures. However, they’re not particularly ‘rational’ from the perspective of the intellectualist view of reason. I would do better, of course, in an interactive setting, because I’ve learned – through interactions of a sort (such as my recent reading of Siddhartha Mukherjee’s brilliant cancer book, which in turn sent me to the website of the US Surgeon-General’s report on smoking, and through other readings on the nature of addiction) – to have a much more nuanced and informed view. Still, my ‘smokers are losers’ disgust and disdain is perfectly adequate for my own everyday purposes!

The point is, of course, that reason evolved first and foremost to promote our survival, but further evolved, in our highly social species, to enable us to impress and influence others. And others have developed their own sophisticated reasons to impress and influence us. It follows that the best and most fruitful reasoning comes via interactions – collaborative or argumentative, in the best sense – with our peers. Of course, as I’ve stated it here, this is a hypothesis, and it’s quite hard to prove definitively. We’re all familiar with the apparently solitary geniuses – the Newtons, Darwins and Einsteins – who’ve transformed our understanding, and those who’ve been exposed to formal logic will be impressed with the rigour of Aristotelian and post-Aristotelian reasoning, and the concepts of validity and soundness as the sine qua non of good reasoning (not to mention those fearfully absolute terms, rational and irrational). Yet these supposedly solitary geniuses often admitted themselves that they ‘stood on the shoulders of giants’: Einstein often mentioned his indebtedness to other thinkers, and Darwin’s correspondence was voluminous. Science is more than ever today a collaborative or competitively interactive process. Think also of the mathematician Paul Erdős whose obsessive interest in this most rational of activities led to a record number of collaborations.

These are mostly my own off-the-cuff thoughts. I’ll return to Mercier and Sperber’s writings on the evolution of reasoning and its modular nature next time.

Written by stewart henderson

February 1, 2020 at 11:11 am

interactional reasoning: cognitive or myside bias?

In the previous post on this topic, I wrote of surprise as a motivator for questioning what we think we know about our world, a shaking of complacency. In fact we need to pay attention to the unexpected, because of its greater potential for harm (or benefit) than the expected. It follows that expecting the unexpected, or at least being on guard for it, is a reasonable approach. Something which disconfirms our expectations can teach us a lot – it might be the ugly fact that undermines a beautiful theory. So, it’s in our interest to watch out for, and even seek out, information that undermines our current knowledge – though it might be pointed out that it’s rarely the person who puts forward a theory who discovers the inconvenient data that undermines it. The philosopher Karl Popper promoted ‘falsificationism’ as a way of testing and tightening our knowledge, and it’s interesting that the very title of his influential work Conjectures and refutations speaks to an interactive approach towards reasoning and evaluating ideas. 

In The enigma of reason, Mercier and Sperber argue that confirmation bias can best be explained by the fact that, while most of our initial thinking about a topic is of the heuristic, fast-and-frugal kind, we then spend a great deal more time, when asked about our reasoning re a particular decision, developing post-hoc justifications. Psychological research has borne this out. The authors suggest that this is more a defence of the self, and of our reputation. They suggest that it’s more of a myside bias than a confirmation bias. Here’s an interesting example of the effect:

Deanna Kuhn, a pioneering scholar of argumentation and cognition, asked participants to take a stand on various social issues – unemployment, school failure and recidivism. Once the participants had given their opinion, they were asked to justify it. Nearly all participants obliged, readily producing reasons to support their point of view. But when they were asked to produce counterarguments to their own view, only 14 percent were consistently able to do so, most drawing a blank instead.

Mercier & Sperber, The enigma of reason, pp. 213-214

The authors give a number of other examples of research confirming this tendency, including one in which the participants were divided into two groups, one with high political knowledge and another with limited knowledge. The low-knowledge group were able to provide twice as many arguments for their view of an issue as arguments against, but the high-knowledge group performed even more poorly, being unable to provide any arguments against. ‘Greater political knowledge only amplified their confirmation bias’. Again, the reason for this appears to be reputational. The more justifications you can find for your views and decisions, the more your reputation is enhanced, at least in your own mind. There seems no obvious benefit in finding arguments against yourself.

All of this seems very negative, and even disturbing. And it’s a problem that’s been known about for centuries. The authors quote a great passage from Francis Bacon’s Novum Organum:

The human understanding when it has once adopted an opinion… draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects and despises, or else by some distinction sets aside and rejects, in order that by this great and pernicious predetermination the authority of its former conclusions may remain inviolate.

Yet it isn’t all bad, as we shall see in future posts…


Hugo Mercier and Dan Sperber, The enigma of reason, 2017

Written by stewart henderson

January 29, 2020 at 1:44 pm

interactional reasoning and confirmation bias – introductory


I first learned about confirmation bias, and motivated reasoning, through my involvement with skeptical movements and through the Skeptics’ Guide to the Universe (SGU) podcast. As has been pointed out by the SGU and elsewhere, confirmation bias – the strong tendency to seek out and credit views, on any topic, that confirm our own, and to dismiss or avoid views from the opposite side – is a feature of liberal and conservative thought in equal measure, and as much a feature of the thinking of highly credentialed public intellectuals as of your average unlearned sot. The problem of confirmation bias, this ‘problem in our heads’, has been blamed for the current social media maladies we supposedly suffer from, creating increasingly partisan echo-chambers in which we allow ourselves, or are ‘driven by clicks’, to be shut off from opposing views and arguments.

But is confirmation bias quite the bogey it’s generally claimed to be? Is it possibly an evolved feature of our reasoning? This raises fundamental questions about the very nature of what we call reason, and how and why it evolved in the first place. Obviously I’m not going to be able to deal with this Big Issue in the space of the short blog pieces I’ve been writing recently, so it’ll be covered by a number of posts. And, just as obviously, my questioning of confirmation bias hasn’t sprung from my own somewhat limited genius – it pains me to admit – but from some current reading material.

The enigma of reason: a new theory of human understanding, by research psychologists Hugo Mercier and Dan Sperber, is a roolly important and timely piece of work, IMHO. So important that I launch into any attempt to summarise it with much trepidation. Anyway, their argument is that reasoning is largely an interactive tool, and evolved as such. They contrast the interactive view of reason with the ‘intellectualist’ view, which begins with Aristotle and his monumentally influential work on logic and logical fallacies. So with that in mind, they tackle the issue of confirmation bias in chapter 11 of their book, entitled ‘Why is reason biased?’

The authors begin the chapter with a cautionary tale, of sorts. Linus Pauling, winner of two Nobel Prizes and regarded by his peers as perhaps the most brilliant chemist of the 20th century, became notoriously obsessed with the healing powers of vitamin C, in spite of mounting evidence to the contrary, raising the question of how such a brilliant mind could get it so wrong. And perhaps a more important question – if such a mind is capable of such bias, what hope is there for the rest of us?

So the authors look more closely at why bias occurs. Often it’s a matter of ‘cutting costs’ – that is, the processing costs of cognition. An example is the use of the ‘availability heuristic’, which Daniel Kahneman writes about in Thinking fast and slow, where he also describes the related notion of WYSIATI (what you see is all there is). If, because you work in a hospital, you see many victims of road accidents, you’re liable to over-estimate the number of road accidents that occur in general. Or, because most of your friends hold x political views, you’ll be biased towards thinking that more people hold x political views than is actually the case. It’s a fast and lazy form of inferential thinking, though by no means always unreliable. Heuristics in general are described as ‘fast and frugal’ ways of thinking, which save a lot in cognitive cost while losing a little in reliability. In fact, as research has (apparently) shown, heuristics can sometimes be more reliable than painstaking, time-consuming analysis of a problem.

One piece of research illustrative of fast-and-frugal cognitive mechanisms involves bumble-bees and their strategies for avoiding predators (I won’t give the details here). The point is that reasoning as an evolved mechanism is surely directed first and foremost at our individual survival – at being safe rather than right. It follows that some such mechanism, whether we call it reasoning or not, exists in more or less complex form in more or less complex organisms. It also follows from this reasoning-for-survival outlook that we pay far more attention to something surprising that crops up in our environment than to routine stuff. As the authors point out:

Even one-year-old babies expect others to share their surprise. When they see something surprising, they point toward it to share their surprise with nearby adults. And they keep pointing until they obtain the proper reaction or are discouraged by the adults’ lack of reactivity.

Mercier & Sperber, The enigma of reason, p210

Needless to say, the adults’ reactions in such an everyday situation are crucial for the child – she learns that what surprised her is perhaps not so surprising, or is pleasantly surprising, or is dangerous, etc. All of this helps us in fast-and-frugal thinking from the very start.

Surprises – events and information that violates our expectations – are always worth paying attention to, in everyday life, for our survival, but also in our pursuit of accurate knowledge of the world, aka science. More about that, and confirmation bias, in the next post.


The enigma of reason: a new theory of human understanding, by Hugo Mercier & Dan Sperber, 2017

Written by stewart henderson

January 28, 2020 at 2:13 pm

preliminary thoughts on reasoning and reputation



In my youth I learned about syllogisms and modus ponens and modus tollens and the invalidity of arguments ad hominem and reductio ad absurdum, and valid but unsound arguments and deduction and induction and all the rest, and even wrote pages filled with ps and qs to get myself clear about it all, and then forgot about it. All that stuff was only rarely applied to everyday life, where, it seemed, our reasoning, though important, was more implicit and intuitive. What I did notice though – being a bit of a loner – was that when I did have a disagreement with someone which left a bitter taste in my mouth, I would afterwards go over the argument in my head to make it stronger, more comprehensive, more convincing and bullet-proof (and of course I would rarely get the chance to present this new and improved version). But interestingly, as part of this process, I would generally make my opponent’s argument stronger as well, even to the point of conceding some ground to her and coming to a reconciliation, out of which both of us would be reputationally enhanced.

In fact, I have to say I spend quite a bit of time having these imaginary to-and-fros, not only with ‘real people’, but often with TV pundits or politicians who’ll never know of my existence. To take another example, when many years ago I was accused of a heinous crime by a young lad to whom I was a foster-carer, I spent excessive amounts of time arguing my defence against imaginary prosecutors of fiendish trickiness, but the case was actually thrown out without my ever having, or being allowed, to say a word in a court-house, other than ‘not guilty’.

So, is all this just so much wasted energy? Well, of course not. For example, I’ve used all that reflection on the court case to give, from my perspective, a comprehensive account of what happened and why, of my view of the foster-care system and its deficiencies, of the failings of the police in the matter and so forth, to friends and interested parties, as well as in writing on my blog. And it’s the same with all the other conversations with myself – they’ve sharpened my view of the matter in hand, of people’s motivations for holding different views (or my view of their motivations), they’ve caused me to engage in research which has tightened or modified my position, and sometimes to change it altogether.

All of this is preliminary to my response to reading The enigma of reason, by Hugo Mercier and Dan Sperber, which I’m around halfway through. One of the factors they emphasise is this reputational aspect of reason. My work to justify myself in the face of a false allegation was all about restoring or shoring up my reputation, which involved not just explaining why I could not have done what I was accused of doing, but explaining why person x would accuse me of doing it, knowing I would have to contend with ‘where there’s smoke there’s fire’ views that could be put, even if nobody actually put them.

So because we’re concerned, as highly socialised creatures, with our reputations, we engage in a lot of post-hoc reasoning, which is not quite to say post-hoc rationalisation, which we tend to think of as making excuses after the fact (something we do a lot of as well). A major point that Sperber and Mercier are keen to emphasise is that we largely negotiate our way through life via pretty reliable unconscious inferences and intuitions, built up over years of experience, which we only give thought to when they’re challenged or when they fail us in some way. But of course there’s much more to their ‘new theory of human understanding’ than this. In any case much of what the book has to say makes very good sense to me, and I’ll explore this further in future posts.

Written by stewart henderson

January 20, 2020 at 2:05 pm

What is inference?


Don’t believe everything you read

What are you inferring?

So am I to infer from this you’re not interested?

What does inferring actually mean? What is it to ‘infer’? Does it require language? Can the birds and the bees do it? We traditionally associate inference with philosophy, which talks of deductive inference. For example, here’s a quote from Blackwell’s dictionary of cognitive science:

Inferences are made when a person (or machine) goes beyond available evidence to form a conclusion. With a deductive inference, this conclusion always follows the stated premises. In other words, if the premises are true, then the conclusion is valid. Studies of human efficiency in deductive inference involves conditional reasoning problems which follow the “if A, then B” format.

So according to this definition, only people, and machines constructed by people, can do it, deductively or otherwise. However, psychologists have pretty thoroughly demolished this view in recent years. In ‘Understanding Inference’, section 2 of their book The enigma of reason, cognitive psychologists Hugo Mercier and Dan Sperber explore our developing view of the concept.

Inference is largely based on experience. Think of Pavlov and his dogs. In his famous experiment he created an inferential association in the dogs’ minds between a bell and dinner. Hearing the bell thus set off salivation in expectation of food. The bell didn’t cause the salivation (or it wasn’t the ultimate cause); the connection was in the mind of the dog. The hearing of the bell set off a basic thought process which brought on the salivation. The dog inferred from experience, manipulated by the experimenter, that food was coming.

Mercier and Sperber approvingly quote David Hume’s common sense ideas about inference and its widespread application. Inference, he recognised, was a much more basic and universal tool than reason, and it was a necessary part of the toolkit of any sentient being. ‘Animals’, he wrote, ‘are not guided in these inferences by reasoning: Neither are children: Neither are the generality of mankind, in their ordinary actions and conclusions. Neither are philosophers themselves, who, in all the active parts of life, are, in the main, the same with the vulgar…. Nature must have provided some other principle, of more ready, and more general use and application; nor can an operation of such immense consequence in life, as that of inferring effects from causes, be trusted to the uncertain process of reasoning and argumentation’.

This is a lovely example of Humean skepticism, which flies in the face of arid logicalism, and recognises that the largely unconscious process of inference, which we would now recognise as a product of evolution, a basic survival mechanism, is more reliable in everyday life than the most brilliantly constructed logical systems.

The point is that we make inferences more or less constantly, and mostly unconsciously. The split-second decisions made in sport, for example, are all made, if not unconsciously, then with an automaticity not attributable to reason. And most of our life is lived with a similar lack of deep reflection, from inference to inference, like every other animal. Inference, then, to quote Mercier and Sperber’s gloss on Hume, is simply ‘the extraction of new information from information already available, whatever the process’. It’s what helps us slip the defender and score a goal in soccer, or prompts us to check the batteries when the remote stops working, or moves us to look forward to break-time when we smell coffee. It’s also what wags your dog’s tail when she hears familiar footsteps approaching the house.

There’s a lot more to be said, of course…

Written by stewart henderson

December 3, 2019 at 9:53 pm

Bayesian probability, sans maths (mostly)


Bayesian stuff – it gets more complicated, apparently

Okay time to get back to sciency stuff, to try to get my head around things I should know more about. Bayesian statistics and probability have been brought to the periphery of my attention many times over the years, but my current slow reading of Daniel Kahneman’s Thinking fast and slow has challenged me to master it once and for all (and then doubtless to forget about it forevermore).

I’ve started a couple of pieces on this topic in the past week or so, and abandoned them along with all hope of making sense of what is no doubt a doddle for the cognoscenti, so I clearly need to keep it simple for my own sake. The reason I’m interested is because critics and analysts of both scientific research and political policy-making often complain that Bayesian reasoning is insufficiently utilised, to the detriment of such activities. I can’t pretend that I’ll be able to help out though!

So Thomas Bayes was an 18th century English statistician who left a theorem behind in his unpublished papers, apparently underestimating its significance. The person most responsible for utilising and popularising Bayes’ work was the French polymath Pierre-Simon Laplace. The theorem, or rule, is captured mathematically thusly:

P(A|B) = P(B|A) × P(A) / P(B)

where A and B are events, and P(B), that is, the probability of event B, is not equal to zero. In statistics, the probability of an event’s occurrence ranges from 0 to 1 – meaning zero probability to total certainty.
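In code, the rule is a one-liner. Here’s a minimal sketch in Python (the function name and the sample numbers are mine, purely for illustration):

```python
def bayes(p_b_given_a: float, p_a: float, p_b: float) -> float:
    """Return P(A|B) = P(B|A) * P(A) / P(B), for non-zero P(B)."""
    if p_b == 0:
        raise ValueError("P(B) must be non-zero")
    return p_b_given_a * p_a / p_b

# Illustrative numbers: P(B|A) = 0.9, P(A) = 0.2, P(B) = 0.3
print(round(bayes(0.9, 0.2, 0.3), 2))  # 0.6
```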

I do, at least, understand the above equation, which, wordwise, means that the probability of A occurring, given that B has occurred, is equal to the probability of B occurring, given that A has occurred, multiplied by the probability of A’s occurrence, all divided by the probability of B’s occurrence. However, after tackling a few video mini-lectures on the topic I’ve decided to give up and focus on Kahneman’s largely non-mathematical treatment with regard to decision-making. The theorem, or rule, presents, as Kahneman puts it, ‘the logic of how people should change their mind in the light of evidence’. Here’s how Kahneman first describes it:

Bayes’ rule specifies how prior beliefs… should be combined with the diagnosticity of the evidence, the degree to which it favours the hypothesis over the alternative.

D Kahneman, Thinking fast and slow, p154

In the simplest example – if you believe that there’s a 65% chance of rain tomorrow, you must believe that there’s a 35% chance of no rain tomorrow, rather than any alternative figure. That seems logical enough, but take this example re US Presidential elections:

… if you believe there’s a 30% chance that candidate x will be elected President, and an 80% chance that he’ll be re-elected if he wins first time, then you must believe that the chances that he will be elected twice in a row are 24%.

This is also logical, but not obvious to a surprisingly large percentage of people. What appears to ‘throw’ people is a story, a causal narrative. They imagine a candidate winning, somewhat against the odds, then proving her worth in office and winning easily next time round – this story deceives them into defying logic and imagining that the chance of her winning twice in a row is greater than that of winning first time around – which is a logical impossibility. Kahneman places this kind of irrationalism within the frame of system 1 v system 2 thinking – roughly equivalent to intuition v concentrated reasoning. His solution to the problem of this kind of suasion-by-story is to step back and take greater stock of the ‘diagnosticity’ of what you already know, or what you have predicted, and how it affects any further related predictions. We’re apparently very bad at this.

There are many examples throughout the book of failure to reason effectively from information about base rates, often described as ‘base-rate neglect’. A base rate is a statistical fact which should be taken into account when considering a further probability. For example, when given information about the character of a fictional person T, information that was deliberately designed to suggest he was stereotypical of a librarian, research participants gave the person a much higher probability of being a librarian rather than a farmer, even though they knew, or should have known, that the number of persons employed as farmers was higher by a large factor than those employed as librarians (the base rate of librarians in the workforce). Of course the degree to which the base rate was made salient to participants affected their predictions.
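To see how strongly a base rate should weigh, here’s a toy Bayes calculation with made-up figures (Kahneman doesn’t supply these numbers): suppose farmers outnumber librarians 20 to 1, and the character sketch fits 90% of librarians but only 10% of farmers.

```python
# Hypothetical figures, for illustration only
p_librarian = 1 / 21           # farmers outnumber librarians 20:1
p_farmer = 20 / 21
p_fits_given_librarian = 0.9   # the sketch sounds like a librarian
p_fits_given_farmer = 0.1

# P(sketch fits), then P(librarian | sketch fits) via Bayes' rule
p_fits = (p_fits_given_librarian * p_librarian
          + p_fits_given_farmer * p_farmer)
p_librarian_given_fits = p_fits_given_librarian * p_librarian / p_fits
print(round(p_librarian_given_fits, 2))  # 0.31
```

Even with a sketch nine times more typical of librarians, the base rate makes ‘farmer’ the better bet – which is roughly what the research participants failed to see.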

Here’s a delicious example of the application, or failure to apply, Bayes’ rule:

A cab was involved in a hit-and-run at night. Two cab companies, Green Cabs and Blue Cabs, operate in the city. You’re given the following data:

– 85% of the cabs in the city are Green, 15% are Blue.

– A witness identified the cab as Blue. The court tested the reliability of the witness under the circumstances that existed on the night of the accident and concluded that the witness correctly identified each one of the two colours 80% of the time and failed 20% of the time.

What is the probability that the car involved in the accident was Blue rather than Green?

D Kahneman, Thinking fast and slow, p166

It’s an artificial scenario, granted, but if we accept those figures, Bayes’ rule gives us this: the prior odds that the cab was Blue are .15/.85, and the witness’s evidence multiplies those odds by .8/.2, giving posterior odds of (.15/.85) × (.8/.2) ≈ .706. Converting odds back to a probability – .706 divided by 1.706 – gives approximately 41%.
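The figure is easy to check in Python (the variable names are mine), computing the posterior both directly from Bayes’ rule and via prior odds times the likelihood ratio:

```python
# Cab problem: 15% of cabs are Blue; the witness is right 80% of the time
p_blue, p_green = 0.15, 0.85
p_says_blue_given_blue = 0.80    # witness correct
p_says_blue_given_green = 0.20   # witness mistaken

# Direct Bayes: P(Blue | witness says Blue)
numerator = p_says_blue_given_blue * p_blue
evidence = numerator + p_says_blue_given_green * p_green
posterior = numerator / evidence
print(round(posterior, 3))  # 0.414

# Same answer via prior odds × likelihood ratio
odds = (p_blue / p_green) * (p_says_blue_given_blue / p_says_blue_given_green)
print(round(odds / (1 + odds), 3))  # 0.414
```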

So how close were the research participants to this figure? Most participants ignored the statistical data – the base rates – and gave the figure of 80%. They were more convinced by the witness. However, when the problem was framed differently, by providing causal rather than statistical data, participants’ guesses were more accurate. Here’s the alternative presentation of the scenario:

You’re given the following data:

– the two companies operate the same number of cabs, but Green cabs are involved in 85% of accidents

– the information about the witness is the same as previously presented

The mathematical result is the same, but this time the guesses were much closer to the correct figure. The difference lay in the framing. Green cabs cause accidents. That was the fact that jumped out, whereas in the first scenario, the fact that most clearly jumped out was that the witness identified the offending car as Blue. The statistical data in scenario 1 was largely ignored. In the second scenario, the witness’s identification of the Blue car moderated the tendency to blame the Green cars, whereas in scenario 1 there was no ‘story’ about Green cars causing accidents and the blame shifted almost entirely to the Blue cars, based on the witness’s story. Kahneman named his chapter about this tendency ‘Causes trump statistics’.

So there are causal and statistical base rates, and the lesson is that in much of our intuitive understanding of probability, we simply pay far more attention to causal base rates, largely to our detriment. Also, our causal inferences tend to be stereotyped, so that only if we are faced with surprising causal rates, in particular cases and not presented statistically, are we liable to adjust our probabilistic assessments. Kahneman presents some striking illustrations of this in the research literature. Causal information creates bias in other areas of behaviour assessment too, of course, as in the phenomenon of regression to the mean, but that’s for another day, perhaps.

Written by stewart henderson

August 27, 2019 at 2:52 pm