an autodidact meets a dilettante…

‘Rise above yourself and grasp the world’ – attributed to Archimedes

Posts Tagged ‘psychology’

on blogging: a personal view


I have a feeling – I haven’t researched this – that the heyday of blogging is over. Even I rarely read blogs these days, and I’m a committed blogger, and have been since the mid 2000s. I tend to read books and science magazines, and some online news sites, and I listen to podcasts and watch videos – news, historical, academic, etc. 

I should read more blogs. Shoulda-coulda-woulda. Even out of self-interest – reading and commenting on other blogs will drive traffic to my own, as all the advisers say. Perhaps one of the problems is that there aren’t too many blogs like mine – they tend to be personal interest or lifestyle blogs, at least going by those bloggers who ‘like’ my blog, which gives me the distinct impression that those ‘likers’ are just trying to drive traffic to their own blogs, as advised. But the thing is, I like to think of myself as a real writer, whatever that is. Or a public intellectual, ditto. 

However, I’ve never been published in a real newspaper, apart from one article 25 years ago in the Adelaide Review (the only article I’ve ever submitted to a newspaper), which led to my only published novel, In Elizabeth. But I’ve never really seen myself as a fiction writer. I’m essentially a diarist turned blogger – and that transition from diary writing to blogging was transformational, because with blogging I was able to imagine that I had a readership. It’s a kind of private fantasy of being a public intellectual.

I’ve always been inspired by my reading, thinking ‘I could do that’. Two very different writers, among many others, inspired me to keep a diary from the early 1980s, to reflect on my own experiences and the world I found myself in: Franz Kafka and Michel de Montaigne. Montaigne’s influence, I think, has been more lasting, not in terms of what he actually wrote, but in his focus on the wider world, though it was Kafka who was the more immediate influence back in those youthful days, when I was still a little more self-obsessed. 

Interestingly, though, writing about the world is a self-interested project in many ways. It’s less painful, and less dangerous. I once read that the philosopher and essayist Bertrand Russell, who had attempted suicide a couple of times in his twenties, was asked about those days and how he survived them. ‘I stopped thinking about myself and thought about the world’, he responded.

I seem to recall that Montaigne wrote something like ‘I write not to find out what I think about a topic, but to create that thinking.’ I strongly identify with that sentiment. It really describes my life’s work, such as it is. Considering that, from all outside perspectives, I’m deemed a failure, with a patchy work record, a life mostly spent below the poverty line and virtually no readership as a writer, I’m objective enough and well-read enough to realise that my writing stands up pretty well against those who make a living from their works. Maybe that’s what prevents me from ever feeling suicidal.  

Writing about the world is intrinsically rewarding because it’s a lifelong learning project. Uninformed opinions are of little value, so I’ve been able to take advantage of the internet – which is surely the greatest development in the dissemination of human knowledge since the invention of writing – to embark on this lifelong learning at very little cost. I left school quite young, with no qualifications to speak of, and spent the next few years – actually decades – in and out of dead-end jobs while being both attracted and repelled by the idea of further academic study. At first I imagined myself as a legend in my lunch-time – the smartest person I knew without academic qualifications of any kind. And of course I could cite my journals as proof. These were the pre-internet days, of course, so the only feedback I got was from the odd friend to whom I read or showed some piece of interest.

My greatest failing, as a person rather than a writer, is my introversion. I’m perhaps too self-reliant, too unwilling or unable to join communities. The presence of others rather overwhelms me. I recall reading, in a Saul Bellow novel, of the Yiddish term trepverter – meaning the responses to conversations you only think of after the moment has passed. For me, this trepverter experience takes up much of my time, because the responses are lengthy, even never-ending. It’s a common thing, of course: Chekhov claimed that the best conversations we have are with ourselves, and Adam Smith used to haunt the Edinburgh streets in his day, arguing with himself on points of economics and probably much more trivial matters. How many people I’ve seen drifting along kerbsides, shouting and gesticulating at some invisible, tormenting adversary.

Anyway, blogging remains my destiny. I tried my hand at podcasting, even vodcasting, but I feel I’m not the most spontaneous thinker, and my voice catches in my throat due to my bronchiectasis – another reason for avoiding others. Yet I love the company of others, in an abstract sort of way. Or perhaps I should say, I like others more than I like company – though I have had great experiences in the company of others. But mostly I feel constrained in company, which makes me dislike my public self. That’s why I like reading – it puts me in an idealised company with the writer. I must admit, though, that after my novel was published, and also as a member of the local humanist society, I gave a few public talks or lectures, which I enjoyed immensely – I relish nothing more than being the centre of attention. So it’s an odd combo of shyness and self-confidence that often leaves me scratching my own head. 

This also makes my message an odd one. I’m an advocate of community, and of the example of community-orientated bonobos, but I’m also something of a loner – awkward with small-talk, wanting to meet people, afraid of being overwhelmed by them. Or of being disappointed.

Here’s an example. Back in the eighties, I read a book called Melanie. It was a collection of diary writings of a young girl who committed suicide, at age 18 as I remember. It was full of light and dark thoughts about family, friends, school and so forth. She came across as witty, perceptive, mostly a ‘normal’ teenager, but with this dark side that seemed incomprehensible to herself. Needless to say, it was an intimate, emotional and impactful reading experience. I later showed the book to a housemate, a student of literature, and his response shocked me. He dismissed it out of hand, as essentially childish, and was particularly annoyed that the girl should have a readership simply because she had suicided. He also protested, rather too much, I felt, about suicide itself, which I found revealing. He found such acts to be both cowardly and selfish. 

I didn’t argue with him, though there was no doubt a lot of trepverter going on in my head afterwards. For the record, I find that suicides can’t be easily generalised: motives are multifactorial, and our control over our own actions is often more questionable than it seems. In any case, human sympathy should be in abundant supply, especially for the young. 

So sometimes it feels safer to confide in an abstract readership, even a non-existent one. I’ll blog on, one post after another. 

Written by stewart henderson

March 30, 2021 at 3:40 pm

a bonobo world 26: boys and girls at work and play


Emmanuelle Charpentier and Jennifer Doudna, brilliant women with great dress sense

In her introduction to The Second Sex, Simone de Beauvoir wrote this: 

… the truth is that anyone can clearly see that humanity is split into two categories of individuals with manifestly different clothes, faces, bodies, smiles, movements, interests and occupations; these differences are perhaps superficial; perhaps they are destined to disappear. What is certain is that for the moment they exist in a strikingly obvious way.

A whole book could easily be written – some already have – to expand on this apparently mundane observation. Today in the west, or the developed world, or Anglo-American or Euro-American society (I never know quite what to call it), there are no set rules, of course, about how people should dress, or behave, or work or play, gender-wise, but there are conventions and social pressures, and I’ve noted encouraging developments, as well as their opposite.

A close female friend expressed a certain despair/disdain the other day in telling me that Dr Jill Biden, aged 69, wore stilettos for her husband’s inauguration as US President. I share that friend’s conviction that stilettos should only be used as murder weapons. In any case men only wear stilettos when in drag, which is all too rare. 

On clothing and accessories, while today’s variety is inspiring and liberating for both sexes, one still sees frustrating gender-based tendencies everywhere. Frills and furbelows have long been all the go for female formal attire, while tuxes or frock-coats are de rigueur for males, complete with ties, bowed or straight. These traditions tend to emphasise gender differences you’d never notice in bonobos, though there is a welcome playfulness of gender-swapping attire among the elites, seldom replicated in your local bar or restaurant. 

What has constantly surprised me, as a person who spent his youth in the sixties and seventies, when déclassé jeans and t-shirts, in colourful variety, were common and pleasantly informal, is that those decades didn’t establish a trend of ambisexual dress – just as I’ve been surprised that traditional marriage didn’t get thrown out as seemed to be on the cards in those days. Marriage today appears to represent much of human ambiguity – a commitment to monogamous ideals even while recognising their limitations, even their absurdity. Conservatives argue that loyalty is a much undervalued value, but it’s always been possible to have more than one loyal friend, with benefits. Bonobos manage to have a bunch of them. Bonobos aren’t being rad, they’re just being bonobos. Which raises the question: what is it, to be human?

David Deutsch, in The beginning of infinity, celebrates and encourages our infinite possibilities, to find solutions, to expand our outlooks, to achieve outrageously amazing things. He writes of the value of optimism over pessimism, and progress over stasis. I’m largely in agreement, but with some reservations. He has nothing to say about community, for example. Community, it seems to me, has become ever more important as change has become more rapid. As Deutsch and others have pointed out, during the many thousands of years when humans lived the hunter-gatherer life, with no doubt many variations, life simply didn’t change from generation to generation. And as long as that life was sustainable, there was little need for new developments, new hunting or grinding implements, new forms of shelter or clothing. So, nobody was out of date or old-fashioned, there were no old fuddy-duddies you wouldn’t be seen dead with. In fact, quite the opposite – the elders would have been more expert at the latest technology, developed in the previous aeon, than the youngsters, who would marvel at how those old guys’ boomerangs always came back (okay, they were never actually intended to). Given this relatively static society, it’s hardly surprising that elders were more respected, for their skills, experience and store of communal lore, than today’s nursing home denizens. And, as always, I’m aware of the multifarious nature of modern human societies, static and otherwise, to which I have little access, beyond book-larnin. Most of these societies or cultures, though, are today forced to interact with others, creating identity confusions and divided loyalties by the brainload.

Anyway, sticking with the White Anglo-Saxon ex-Protestant culture I’m familiar with, I’m a bit shocked that, despite two or more waves of feminism in the last century or so, women are still earning less than men and paying more for what I would deem unnecessary accoutrements, including hairstyles, bling, fancy tattoos, make-up and the aforementioned frills and furbelows. I recently bought a ‘men’s’ stick deodorant, which seemed to me nothing more than an anti-perspirant, and which was identical to that of my female partner, only bigger, and cheaper! These are ‘first-world issues’, of course, but they reflect, in little, an exploitation of the feminine worldwide, which seems a hard nut to crack.  

There’s of course a thing about eternal youth, in regard to women, that should be addressed. Men in their fifties don’t wear make-up, at least not the ones I know. Quite a few women I know, in their fifties, and older, also don’t wear make-up, but let’s face it, most of them do – with all the expense, as well as the time and effort, this involves. They do it, presumably, to hide the effects of gravity, though gravity always wins, as Radiohead informs us. With men, apparently, gravity lends gravitas.

I’ve often – in fact, ever since adolescence  – imagined myself as female. Mostly lesbian female, though I did have an early period of male-male attraction. So, if I did turn out female, how would I behave, appearance-wise, now that I’m in my sixties? Would I wear an op-shop jacket, t-shirt (usually with some thought-bubble printing) and chino-type trousers, as I do now? I hope so. It’s a kind of unisex outfit for academic and sciencey people, the types I’ve always aspired to be. But unfortunately, feminists have recently written of the pink/blue divide in children’s clothing that’s stronger than ever, as well as the divide in toys – fighting, racing and danger versus dancing, cuddling and beauty. This appears to be driven by manufacturers and advertisers, who, like social media moguls, seem to derive a benefit from driving their customers down wormholes of like-mindedness. Not surprisingly, social psychologists find that children benefit from being more unisex in these choices – not a matter of turning them into their opposites, but seeing dolls and trucks as others see them, and generally being more colourful. And slowly, all too slowly, we’re following this advice, and seeing more male nurses and female truck-drivers than previously. Not to mention female white supremacists sporting submachine guns – but that’s only in the US, they do things differently there. And more males working in child-care? That’s another nut to crack.


Simone de Beauvoir, Le Deuxième Sexe (1949), new translation 2009.


Written by stewart henderson

January 29, 2021 at 12:59 pm

interactional reasoning: some stray thoughts



As I mentioned in my first post on this topic, bumble-bees have a fast-and-frugal way of obtaining the necessary from flowers while avoiding predators, such as spiders, which is essentially about ‘assessing’ the relative cost of a false negative (sensing there’s no spider when there is) and a false positive (sensing there’s a spider when there’s not). Clearly, the cost of a false negative is likely death, but a false positive also has a cost, in wasting time and energy in the search for safe flowers. It’s better to be safe than sorry, up to a point. The bees still have a job to do, which is their raison d’être. So they’ve evolved to be wary of certain rough-and-ready signs of a spider’s presence. It’s not a fool-proof system, but it ensures that false positives are a little more over-determined than false negatives, enough to ensure overall survival, at least against one particular threat. 
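The bee’s trade-off can be put in expected-cost terms. Here’s a minimal sketch in Python – the costs, the base rate of spider-occupied flowers and the two ‘wariness’ policies are invented purely for illustration, not drawn from any real data on bees:

```python
# Invented illustrative numbers: a false negative (landing on a spider)
# is far more costly than a false positive (skipping a safe flower).
COST_FALSE_NEGATIVE = 100.0   # risk of death
COST_FALSE_POSITIVE = 1.0     # wasted time and energy
P_SPIDER = 0.05               # assumed base rate of occupied flowers

def expected_cost(p_flee_given_spider, p_flee_given_safe):
    """Expected cost per flower visit for a given wariness policy."""
    false_neg = P_SPIDER * (1 - p_flee_given_spider) * COST_FALSE_NEGATIVE
    false_pos = (1 - P_SPIDER) * p_flee_given_safe * COST_FALSE_POSITIVE
    return false_neg + false_pos

# A jumpy bee flees at rough-and-ready signs, accepting many false alarms;
# a careless bee rarely flees and so rarely wastes time.
jumpy = expected_cost(p_flee_given_spider=0.9, p_flee_given_safe=0.3)
careless = expected_cost(p_flee_given_spider=0.5, p_flee_given_safe=0.05)
print(jumpy, careless)  # the jumpy policy has the lower expected cost
```

With these (made-up) numbers the jumpy bee pays about 0.79 per visit against the careless bee’s roughly 2.55 – which is the sense in which false positives can be ‘over-determined’ and still pay for themselves.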

When I’m walking on the street and note that a smoker is approaching, I have an immediate impulse, more or less conscious, to give her a wide berth, and even cross the road if possible. I suffer from bronchiectasis, an airways condition, which is much exacerbated by smoke, dust and other particulates. So it’s an eminently reasonable decision, or impulse (or something between the two). I must admit, though, that this event is generally accompanied by feelings of annoyance and disgust, and thoughts such as ‘smokers are such losers’ – in spite of the fact that, in the long long ago, I was a smoker myself.

Such negative thoughts, though, are self-preservative in much the same way as my avoidance measures. However, they’re not particularly ‘rational’ from the perspective of the intellectualist view of reason. I would do better, of course, in an interactive setting, because I’ve learned – through interactions of a sort (such as my recent reading of Siddhartha Mukherjee’s brilliant cancer book, which in turn sent me to the website of the US Surgeon-General’s report on smoking, and through other readings on the nature of addiction) – to have a much more nuanced and informed view. Still, my ‘smokers are losers’ disgust and disdain are perfectly adequate for my own everyday purposes!

The point is, of course, that reason evolved first and foremost to promote our survival, but further evolved, in our highly social species, to enable us to impress and influence others. And others have developed their own sophisticated reasons to impress and influence us. It follows that the best and most fruitful reasoning comes via interactions – collaborative or argumentative, in the best sense – with our peers. Of course, as I’ve stated it here, this is a hypothesis, and it’s quite hard to prove definitively. We’re all familiar with the apparently solitary geniuses – the Newtons, Darwins and Einsteins – who’ve transformed our understanding, and anyone who has been exposed to formal logic will be impressed by the rigour of Aristotelian and post-Aristotelian logic, and the concepts of validity and soundness as the sine qua non of good reasoning (not to mention those fearfully absolute terms, rational and irrational). Yet these supposedly solitary geniuses often admitted that they ‘stood on the shoulders of giants’: Einstein often mentioned his indebtedness to other thinkers, and Darwin’s correspondence was voluminous. Science is more than ever today a collaborative or competitively interactive process. Think also of the mathematician Paul Erdős, whose obsessive interest in this most rational of activities led to a record number of collaborations.

These are mostly my own off-the-cuff thoughts. I’ll return to Mercier and Sperber’s writings on the evolution of reasoning and its modular nature next time.

Written by stewart henderson

February 1, 2020 at 11:11 am

interactional reasoning: cognitive or myside bias?


In the previous post on this topic, I wrote of surprise as a motivator for questioning what we think we know about our world, a shaking of complacency. In fact we need to pay attention to the unexpected, because of its greater potential for harm (or benefit) than the expected. It follows that expecting the unexpected, or at least being on guard for it, is a reasonable approach. Something which disconfirms our expectations can teach us a lot – it might be the ugly fact that undermines a beautiful theory. So, it’s in our interest to watch out for, and even seek out, information that undermines our current knowledge – though it might be pointed out that it’s rarely the person who puts forward a theory who discovers the inconvenient data that undermines it. The philosopher Karl Popper promoted ‘falsificationism’ as a way of testing and tightening our knowledge, and it’s interesting that the very title of his influential work Conjectures and refutations speaks to an interactive approach towards reasoning and evaluating ideas. 

In The enigma of reason, Mercier and Sperber argue that confirmation bias can best be explained by the fact that, while most of our initial thinking about a topic is of the heuristic, fast-and-frugal kind, we then spend a great deal more time, when asked about our reasoning re a particular decision, developing post-hoc justifications. Psychological research has borne this out. The authors suggest that this is more a defence of the self, and of our reputation. They suggest that it’s more of a myside bias than a confirmation bias. Here’s an interesting example of the effect:

Deanna Kuhn, a pioneering scholar of argumentation and cognition, asked participants to take a stand on various social issues – unemployment, school failure and recidivism. Once the participants had given their opinion, they were asked to justify it. Nearly all participants obliged, readily producing reasons to support their point of view. But when they were asked to produce counterarguments to their own view, only 14 percent were consistently able to do so, most drawing a blank instead.

Mercier & Sperber, The enigma of reason, pp. 213-214

The authors give a number of other examples of research confirming this tendency, including one in which the participants were divided into two groups, one with high political knowledge and another with limited knowledge. The low-knowledge group were able to provide twice as many arguments for their view of an issue as arguments against, but the high-knowledge group performed even more poorly, being unable to provide any arguments against. ‘Greater political knowledge only amplified their confirmation bias’. Again, the reason for this appears to be reputational. The more justifications you can find for your views and decisions, the more your reputation is enhanced, at least in your own mind. There seems no obvious benefit in finding arguments against yourself.

All of this seems very negative, and even disturbing. And it’s a problem that’s been known about for centuries. The authors quote a great passage from Francis Bacon’s Novum Organum:

The human understanding when it has once adopted an opinion… draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects and despises, or else by some distinction sets aside and rejects, in order that by this great and pernicious predetermination the authority of its former conclusions may remain inviolate.

Yet it isn’t all bad, as we shall see in future posts…


Hugo Mercier and Dan Sperber, The enigma of reason, 2017

Written by stewart henderson

January 29, 2020 at 1:44 pm

preliminary thoughts on reasoning and reputation



In my youth I learned about syllogisms and modus ponens and modus tollens and the invalidity of arguments ad hominem and reductio ad absurdum, and valid but unsound arguments and deduction and induction and all the rest, and even wrote pages filled with ps and qs to get myself clear about it all, and then forgot about it. All that stuff was only rarely applied to everyday life, where, it seemed, our reasoning, though important, was more implicit and intuitive. What I did notice though – being a bit of a loner – was that when I did have a disagreement with someone which left a bitter taste in my mouth, I would afterwards go over the argument in my head to make it stronger, more comprehensive, more convincing and bullet-proof (and of course I would rarely get the chance to present this new and improved version). But interestingly, as part of this process, I would generally make my opponent’s argument stronger as well, even to the point of conceding some ground to her and coming to a reconciliation, out of which both of us would be reputationally enhanced.

In fact, I have to say I spend quite a bit of time having these imaginary to-and-fros, not only with ‘real people’, but often with TV pundits or politicians who’ll never know of my existence. To take another example, when many years ago I was accused of a heinous crime by a young lad to whom I was a foster-carer, I spent excessive amounts of time arguing my defence against imaginary prosecutors of fiendish trickiness, but the case was actually thrown out without my ever having, or being allowed, to say a word in a court-house, other than ‘not guilty’.

So, is all this just so much wasted energy? Well, of course not. For example, I’ve used all that reflection on the court case to give, from my perspective, a comprehensive account of what happened and why, of my view of the foster-care system and its deficiencies, of the failings of the police in the matter and so forth, to friends and interested parties, as well as in writing on my blog. And it’s the same with all the other conversations with myself – they’ve sharpened my view of the matter in hand, of people’s motivations for holding different views (or my view of their motivations), they’ve caused me to engage in research which has tightened or modified my position, and sometimes to change it altogether.

All of this is preliminary to my response to reading The enigma of reason, by Dan Sperber and Hugo Mercier, which I’m around halfway through. One of the factors they emphasise is this reputational aspect of reason. My work to justify myself in the face of a false allegation was all about restoring or shoring up my reputation, which involved not just explaining why I could not have done what I was accused of doing, but explaining why person x would accuse me of doing it, knowing I would have to contend with ‘where there’s smoke there’s fire’ views that could be put, even if nobody actually put them.

So because we’re concerned, as highly socialised creatures, with our reputations, we engage in a lot of post-hoc reasoning, which is not quite to say post-hoc rationalisation, which we tend to think of as making excuses after the fact (something we do a lot of as well). A major point that Sperber and Mercier are keen to emphasise is that we largely negotiate our way through life via pretty reliable unconscious inferences and intuitions, built up over years of experience, which we only give thought to when they’re challenged or when they fail us in some way. But of course there’s much more to their ‘new theory of human understanding’ than this. In any case much of what the book has to say makes very good sense to me, and I’ll explore this further in future posts.

Written by stewart henderson

January 20, 2020 at 2:05 pm

inference in the development of reason, and a look at intuition


various more or less feeble attempts to capture intuition 

Many years ago I spent quite a bit of time getting my head around formal logic, filling scads of paper with symbols whose meanings I’ve long since forgotten, obviously through disuse.
I recognise that logic has its uses, tied with mathematics, e.g. in developing algorithms in the field of information technology, inter alia, but I can’t honestly see its use in everyday life, at least not in my own. Yet logic is generally valued as the sine qua non of proper reasoning, as far as I can see.
Again, though, in the ever-expanding and increasingly effective field of cognitive psychology, reason and reasoning as concepts are undergoing massive and valuable re-evaluation. As Hugo Mercier and Dan Sperber argue in The enigma of reason, they have benefitted (always arguably) from being taken out of the hands of logicians and (most) philosophers and examined from an evolutionary and psychological perspective. Charles Darwin read Hume on inference and reasoning and commented in his diary that scientists should consider reason as gradually developed, that’s to say as an evolved trait. So reasoning capacities should be found in other complex social mammals to varying degrees.    

An argument has been put forward that intuition is a process that fits between inference and reason, or that it represents a kind of middle ground between unconscious inference and conscious reasoning. Daniel Kahneman, for example, has postulated three cognitive systems – perception, intuition (system 1 cognition) and reasoning (system 2). Intuition, according to this hypothesis, is the ‘fast’, experience based, rule-of-thumb type of thinking that often gets us into trouble, requiring the slower ‘think again’ evaluation (which is also far from perfect) to come to the rescue. However, Mercier and Sperber argue that intuition is a vague term, defined more by what it lacks than by any defining characteristics. It appears to be a slightly more conscious process of acting or thinking by means of a set of inferences. To use a personal example, I’ve done a lot of cooking over the years, and might reasonably describe myself as an intuitive cook – I know from experience how much of this or that spice to add, how to reduce a sauce, how to create something palatable with limited ingredients and so forth. But this isn’t the product of some kind of intuitive mechanism, rather it’s the product of a set of inferences drawn from trial-and-error experience that is more or less reliable. Mercier and Sperber describe this sense of intuitiveness as a kind of metacognition, or ‘cognition about cognition’, in which we ‘intuit’ that doing this, or thinking that, is ‘about right’, as when we feel or intuit that someone is in a bad mood, or that we left our keys in room x rather than room y. This feeling lies somewhere between consciousness and unconsciousness, and each intuition might vary considerably on that spectrum, and in terms of strength and weakness. Such intuitions are certainly different from perceptions, in that they are feelings we have about something. That is, they belong to us. 
Perceptions, on the other hand, are largely imposed on us by the world and by our evolved receptivity to its stimuli.

All of this is intended to take us, or maybe just me, on the path towards a greater understanding of conscious reasoning. There’s a long way to go…


The enigma of reason, a new theory of human understanding, by Hugo Mercier and Dan Sperber, 2017

Thinking, fast and slow, by Daniel Kahneman, 2011

Written by stewart henderson

December 4, 2019 at 10:45 pm

nothing so simple? the gambler’s fallacy


Humans are capable of reasoning, but not always or often very well. Daniel Kahneman’s famous book Thinking, fast and slow provides us with many examples, and not being much of a clear thinker myself, where probability and all that Bayesian stuff is concerned, I’ll start with something really simple before ascending, one day, to the simply simple. And not being much of a gambler, I’d never heard of the gambler’s fallacy before. It appears to be a simple and obvious fallacy, but I’m sure I can succeed in making it more confusing than it should be.

The fallacy involves believing that what has occurred before might dictate what happens in the future, in a particular context. It’s best explained by the tossing of a coin. With a fair coin, the probability of it landing tails up, on any toss, is .5, given that, in probability language, absolute certainty is given a value of 1, and no possibility at all is given 0. The key here is what I’ve italicised – the fallacy lies in believing that the coin, as if it’s a thinking being, has an interest in maintaining a result, over many tosses, of 50% tails – so that if the proportion of tails skews towards zero, say after 6 heads in a row, the probability of the next toss being tails will rise above .5. 

Put another way: assuming a fair coin, the probability of it landing heads on one toss is .5. That should mean that over time, with x number of tosses, assuming x to be a very large number, the result for a heads should approach 50%. So it would seem quite reasonable, if you were keeping count, to bet on a result that brings the average closer to 50%. That’s without imagining that the coin wants to get to 50%. It just should, shouldn’t it?

The clear answer is no. There can be no influence from the past on any new coin toss. How can there be? That would be truly weird if you think about it. The overall results may approach 50%, according to the law of large numbers, but that’s independent of particular tosses. Suppose you try to create a dependency by betting on a pair of tosses. It could be HH, TT, HT or TH. Those are the only four options and the probability of each of them is .25 (i.e. .5 x .5). So you might think that, after two heads in a row, it would be wise to bet on tails. But this bet would still have a .5 probability of succeeding, and the result HHT, taken together, would be .5 x .5 x .5, which is .125 or one eighth, the same as all the other seven results of three coin tosses. The probability doesn’t change before each toss, no matter the result of the previous toss. 
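The independence point can be checked empirically with a quick simulation – a sketch in Python, with an arbitrary sample size and run length. It compares the overall frequency of heads with the frequency of heads immediately after a run of three heads; if the gambler’s fallacy were right, the second number would dip below the first, but both come out close to .5:

```python
import random

random.seed(42)          # fixed seed so the run is repeatable
N = 200_000              # number of simulated tosses

# True represents heads, False tails, each with probability .5
flips = [random.random() < 0.5 for _ in range(N)]

# Overall frequency of heads approaches .5 (law of large numbers)...
overall = sum(flips) / N

# ...but so does the frequency of heads immediately after
# three heads in a row: past tosses exert no pull on the next.
after_run = [flips[i] for i in range(3, N)
             if flips[i-3] and flips[i-2] and flips[i-1]]
conditional = sum(after_run) / len(after_run)

print(round(overall, 3), round(conditional, 3))  # both within ~0.01 of 0.5
```

About one toss in eight follows a three-heads run, so the conditional estimate rests on roughly 25,000 samples – plenty to show that a run of heads doesn’t make tails ‘due’.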

So far, so clear, but it would be hard not to be influenced into betting against a run continuing. That’s not irrational, is it? But nor is it rational, considering there’s always a 50/50 chance with each toss. It’s just a bet. And yet… I’m reminded of Swann in À la recherche du temps perdu, as my mind clouds over…

Written by stewart henderson

November 17, 2019 at 2:19 pm

Posted in gambling, probability


Lessons from the Trump travesty?


Consider this passage from The moral landscape, by Sam Harris:

As we better understand the brain, we will increasingly understand all of the forces – kindness, reciprocity, trust, openness to argument, respect for evidence, intuitions of fairness, impulse control, the mitigation of aggression, etc – that allow friends and strangers to collaborate successfully on the common projects of civilisation…

These are indeed, and surely, the forces, or traits, we should want in order to have the best social lives. And they involve a richly interactive relationship between the social milieu – the village, the tribe, the family, the state – and the individual brain, or person. They are also, IMHO, the sorts of traits we would hope to find in our best people – for example, our political leaders, regardless of which political faction they represent.

Now consider those traits in respect of one Donald Trump. It should be obvious to any reasoning observer that he is deficient in all of them. And I mean deficient to a jaw-dropping, head-scratching degree. So there are two questions worth posing here.

  1. How could a person, so obviously deficient in all of the traits we would consider vital to the project of civilisation, have been created in a country that prides itself on being a leader of the free, democratic, civilised world?
  2. How could such a person rise to become the President of that country – which, whether or not you agree with its self-description of its own moral worth, is undoubtedly the world’s most economically and militarily powerful nation, and a world-wide promoter of democracy (in theory if not always in practice)?

I feel for Harris, whose book was published in 2010, well before anyone really had an inkling of what was to come. In The moral landscape he argues for objective moral values, or moral realism, but you don’t have to agree with his general philosophical position to acknowledge that the advancement of civilisation is largely dependent on the above-quoted traits. But of course, not everyone acknowledges this, or has ever given a thought to the matter. It’s probably true that most people, in the USA and elsewhere, don’t give a tinker’s cuss about the advancement of civilisation.

So the general answer to question one is easy enough, even if the answer in any particular case requires detailed knowledge. I don’t have such knowledge of the family background, childhood and even pre-natal influences that formed Trump’s profoundly problematic character, but reasonable inferences can be made, I think. For example, one of Trump’s most obvious traits is his complete disregard for the truth. To give one trivial example among thousands, he recently described Meghan Markle, now the Duchess of Sussex, as ‘nasty’, in a televised interview. In another televised interview, very shortly afterwards, he denied saying what he was clearly recorded as saying. This regular pattern of bare-faced lying, without any concern about being found out, confronted with his behaviour, or suffering consequences, says something. It says that he has rarely if ever been ‘corrected’ for breaking this commandment, and, very likely, has been rewarded for it from earliest childhood – the reward likely taking the form of amusement, acclamation and encouragement.

Since, as we know, Trump was a millionaire before he was old enough to pronounce the word, the son of a self-possessed, single-minded property shark who bestowed on the child a thousand indications of his own importance, it’s more than likely that he grew up in a bubble-world in which self-interest and duplicity were constantly encouraged and rewarded, a world of extreme materialism, devoid of any intellectual stimulation. This is the classic ‘spoilt child’ I’ve already referred to. Often, when a child like this has to stand on his own feet, his penchant for lying, his contempt for the law and his endless attention-seeking will get him into legal trouble, but Trump appears to have stayed under the wing of his father for much longer than average. His father bailed him out time and time again when he engaged in dumb business deals, until he learned a little more of the slyness of white-collar crime (including how to steal from his father). His father’s cronies in the crooked business and legal world would also have taught him much.

Trump is surely a clear-cut case of stunted moral development, the darling child who was encouraged, either directly or though observation of the perverse world of white-collar crime that surrounded him, to listen to no advice but his own, to have devotees rather than friends, and to study and master every possible form of exploitation available to him. Over time, he also realised that his habit of self-aggrandisement could be turned to advantage, and that it would continue to win people, in ever greater numbers, if effectively directed. Very little of this, of course, was the result of what psychologists describe as system 2 thinking – and it would be fascinating to study Trump’s brain for signs of activity in the prefrontal cortex – it was more about highly developed intuitions about what he could get away with, and who he could impress with his bluster.

Now, I admit, all of this is somewhat speculative. Given Trump’s current fame, there will doubtless be detailed biographies written about his childhood and formative years, if they haven’t been written already. My point here is that, given the environment of absurd and dodgy wealth to be found in small pockets of US society, and given the ‘greed is good’ mantra that many Americans (and of course non-Americans) swallow like the proverbial kool-aid, it isn’t so surprising that white-collar crime isn’t dealt with remotely adequately, and that characters like Trump dot the landscape, like pus-oozing pimples on human skin. In fact there are plenty of people, rich and poor alike, who would argue that tax evasion shouldn’t even be a crime… while also arguing that the USA, unlike every other western democracy, can’t afford universal medicare.

So that’s a rough-and-ready answer to question one. Question two has actually been addressed in a number of previous posts, but I’ll address it a little differently here.

The USA is, I think, overly obsessed with the individual. It’s a hotbed of libertarianism, an ideology entirely based on the myth of individualism and ‘individual freedom’, and it’s no surprise that Superman, Batman and most other super-heroes were American products. It’s probable that a sizeable section of Trump’s base see him in ‘superhero’ terms, someone not cut in the mould of Washington politicians, someone larger than life, someone almost from outer space in that he talks and acts differently from normal human beings let alone politicians. This makes him exciting and enlivening – like a comic book. And they’re happy to go along for the ride regardless of whether their lives are improved.

I must admit, though, that I’m mystified when I hear Trump supporters still saying ‘he’s done so much for our country’, when it’s fairly clear to me that, apart from cruelly mistreating asylum-seekers, he’s done little other than tweet insults and inanities and cheat at golf. The massive neglect of every aspect of federal government under his ‘watch’ will take decades to repair, and the question of whether the USA will ever recover from the tragi-comedy of this presidency is hard to answer.

But as to how Trump was ever allowed to become President, it’s all about a dangerously flawed political system, one that has too few safeguards against the simplistic populism that the ancient Greek philosophers railed against 2500 years ago. Unabashed elitists, they were deeply concerned that ‘the mob’ would be persuaded by a charismatic blowhard who promised everything and delivered nothing – or, worse than nothing, disaster. They were concerned because they witnessed it in their lifetime.

The USA today is sadly lacking in those safeguards. It probably thought the safeguards were adequate, until Trump came along. For example, it was expected – among gentlemen, so to speak – that successful candidates would present their tax returns, refuse to turn the Presidency to their own profit, support their own intelligence services and justice department, treat long-time allies as allies and long-time adversaries as adversaries, and, in short, display at least some of the qualities I’ve quoted from Harris at the top of this post.

The safeguards, however, need to go much further than this, IMHO. The power of the Presidency needs to be sharply curtailed. A more distributed, collaborative and accountable system needs to be developed, a team-based system (having far more women in leadership positions would help with this), not a system which separates the President/King and his courtiers/administration from congress/parliament. Pardoning powers, veto powers, special executive powers, power to select unelected officials to high office, power to appoint people to the judiciary – all of these need to be reined in drastically.

Of course, none of this is likely to happen in the near future – and I still believe blood will flow before Trump is heaved out of office. But I do hope that the silver lining to the cloud of this presidency is that, in the long term, a less partisan, less individual-based federal system will be the outcome of this Dark Age.

Written by stewart henderson

June 14, 2019 at 5:00 pm

the self and its brain: free will encore


yeah, right

so long as, in certain regions, social asphyxia shall be possible – in other words, and from a yet more extended point of view, so long as ignorance and misery remain on earth, books like this cannot be useless.

Victor Hugo, author’s preface to Les Miserables

Listening to the Skeptics’ Guide podcast for the first time in a while, I was excited by the reporting on a discovery of great significance in North Dakota – a gigantic graveyard of prehistoric marine and other life forms precisely at the K-T boundary, some 3000 kms from where the asteroid struck. All indications are that the deaths of these creatures were instantaneous and synchronous, the first evidence of mass death at the K-T boundary. I felt I had to write about it, as a self-learning exercise if nothing else.

But then, as I listened to other reports and talking points in one of SGU’s most stimulating podcasts, I was hooked by something else, which I need to get out of the way first. It was a piece of research about the brain, or how people think about it, in particular when deciding court cases. When Steven Novella raised the ‘spectre’ of ‘my brain made me do it’ arguments, and the threat that this might pose to ‘free will’, I knew I had to respond, as this free will stuff keeps on bugging me. So the death of the dinosaurs will have to wait.

The more I’ve thought about this matter, the more I’ve wondered how people – including my earlier self – could imagine that ‘free will’ is compatible with a determinist universe (leaving aside quantum indeterminacy, which I don’t think is relevant to this issue). The best argument for this compatibility, or at least the one I used to use, is that, yes, every act we perform is determined, but the determining factors are so mind-bogglingly complex that it’s ‘as if’ we have free will, and besides, we’re ‘conscious’, we know what we’re doing, we watch ourselves deciding between one act and another, and so of course we could have done otherwise.

Yet I was never quite comfortable about this, and it was in fact the arguments of compatibilists like Dennett that made me think again. They tended to be very cavalier about ‘criminals’ who might try to get away with their crimes by using a determinist argument – not so much ‘my brain made me do it’ as ‘my background of disadvantage and violence made me do it’. Dennett and other philosophers struck me as irritatingly dismissive of this sort of argument, though their own arguments, which usually boiled down to ‘you can always choose to do otherwise’ seemed a little too pat to me. Dennett, I assumed, was, like most academics, a middle-class silver-spoon type who would never have any difficulty resisting, say, getting involved in an armed robbery, or even stealing sweets from the local deli. Others, many others, including many kids I grew up with, were not exactly of that ilk. And as Robert Sapolsky points out in his book Behave, and as the Dunedin longitudinal study tends very much to confirm, the socio-economic environment of our earliest years is largely, though of course not entirely, determinative.

Let’s just run through some of this. Class is real, and in a general sense it makes a big difference. To simplify, and to recall how ancient the differences are, I’ll just name two classes, the patricians and the plebs (or think upper/lower, over/under, haves/have-nots).

Various studies have shown that, by age five, the more plebby you are (on average):

  • the higher the basal glucocorticoid levels and/or the more reactive the glucocorticoid stress response
  • the thinner the frontal cortex and the lower its metabolism
  • the poorer the frontal function concerning working memory, emotion regulation, impulse control and executive decision making.

All of this comes from Sapolsky, who cites all the research at the end of his book. I’ll do the same at the end of this post (which doesn’t mean I’ve analysed that research – I’m just a pleb after all. I’m happy to trust Sapolsky). He goes on to say this:

moreover, to achieve equivalent frontal regulation, [plebeian] kids must activate more frontal cortex than do [patrician] kids. In addition, childhood poverty impairs maturation of the corpus callosum, a bundle of axonal fibres connecting the two hemispheres and integrating their function. This is so wrong – foolishly pick a poor family to be born into, and by kindergarten, the odds of your succeeding at life’s marshmallow tests are already stacked against you.

Behave, pp195-6

Of course, this is just the sort of ‘social asphyxia’ Victor Hugo was at pains to highlight in his great work. You don’t need to be a neurologist to realise all this, but the research helps to hammer it home.

These class differences are also reflected in parenting styles (and of course I’m always talking in general terms here). Pleb parents and ‘developing world’ parents are more concerned to keep their kids alive and protected from the world, while patrician and ‘developed world’ kids are encouraged to explore. The patrician parent is more a teacher and facilitator, the plebeian parent is more like a prison guard. Sapolsky cites research into parenting styles in ‘three tribes’: wealthy and privileged; poorish but honest (blue collar); poor and crime-ridden. The poor neighbourhood’s parents emphasised ‘hard defensive individualism’ – don’t let anyone push you around, be tough. Parenting was authoritarian, as was also the case in the blue-collar neighbourhood, though the style there was characterised as ‘hard offensive individualism’ – you can get ahead if you work hard enough, maybe even graduate into the middle class. Respect for family authority was pushed in both these neighbourhoods. I don’t think I need to elaborate too much on what the patrician parenting (soft individualism) was like – more choice, more stimulation, better health. And of course, ‘real life’ people don’t fit neatly into these categories, there are an infinity of variants, but they’re all determining.

And here’s another quote from Sapolsky on research into gene/environment interactions.

Heritability of various aspects of cognitive development is very high (e.g. around 70% for IQ) in kids from [patrician] families but is only around 10% in [plebeian] kids. Thus patrician-ness allows the full range of genetic influences on cognition to flourish, whereas plebeian settings restrict them. In other words, genes are nearly irrelevant to cognitive development if you’re growing up in awful poverty – poverty’s adverse effects trump the genetics.

Behave, p249

Another example of the huge impact of environment/class, too often underplayed by ivory tower philosophers and the silver-spoon judiciary.

Sapolsky makes some interesting points, always research-based of course, about the broader environment we inhabit. Is the country we live in more communal or more individualistic? Is there high or low income inequality? Generally, cultures with high income inequality have less ‘social capital’, meaning levels of trust, reciprocity and cooperation. Such cultures/countries generally vote less often and join fewer clubs and mutual societies. Research into game-playing, a beloved tool of psychological research, shows that individuals from high inequality/low social capital countries show high levels of bullying and of anti-social punishment (punishing ‘overly’ generous players because they make other players look bad) during economic games. They tend, in fact, to punish the too-generous more than they punish actual cheaters (think Trump).

So the determining factors into who we are and why we make the decisions we do range from the genetic and hormonal to the broadly cultural. A couple have two kids. One just happens to be conventionally good-looking, the other not so much. Many aspects of their lives will be profoundly affected by this simple difference. One screams and cries almost every night for her first twelve months or so, for some reason (and there are reasons), the other is relatively placid over the same period. Again, whatever caused this difference will likely profoundly affect their life trajectories. I could go on ad nauseam about these ‘little’ differences and their lifelong effects, as well as the greater differences of culture, environment, social capital and the like. Our sense of consciousness gives us a feeling of control which is largely illusory.

It’s strange to me that Dr Novella seems troubled by ‘my brain made me do it’ arguments, because in a sense that is the correct, if trivial, argument to ‘justify’ all our actions. Our brains ‘make us’ walk, talk, eat, think and breathe. Brains R Us. And not even brains – octopuses are newly recognised as problem-solvers and tool-users without having brains in the usual sense – they have more of a decentralised nervous system, with nine mini-brains somehow co-ordinating when needed. So ‘my brain made me do it’ essentially means ‘I made me do it’, which takes us nowhere. What makes us do things are the factors shaping our brain processes, and they have nothing to do with ‘free will’, this strange, inexplicable phenomenon which supposedly lies outside these complex but powerfully determining factors yet is somehow compatible with them. To say that we could have done otherwise is just an assertion – it’s not a proof of anything.

To be fair to Steve Novella and his band of rogues, they accept that this is an enormously complex issue, regarding individual responsibility, crime and punishment, culpability and the like. That’s why the free will issue isn’t just a philosophical game we’re playing. And lack of free will shouldn’t by any means be confused with fatalism. We can change or mitigate the factors that make us who we are in a huge variety of ways. More understanding of the factors that bring out the best in us, and fostering those factors, is what is urgently required.

just thought I’d chuck this in

Research articles and reading

Behave, Robert Sapolsky, Bodley Head, 2017

These are just a taster of the research articles and references used by Sapolsky re the above.

C Heim et al, ‘Pituitary-adrenal and autonomic responses to stress in women after sexual and physical abuse in childhood’

R J Lee et al ‘CSF corticotrophin-releasing factor in personality disorder: relationship with self-reported parental care’

P McGowan et al, ‘Epigenetic regulation of the glucocorticoid receptor in human brain associates with childhood abuse’

L Carpenter et al, ‘Cerebrospinal fluid corticotropin-releasing factor and perceived early life stress in depressed patients and healthy control subjects’

S Lupien et al, ‘Effects of stress throughout the lifespan on the brain, behaviour and cognition’

A Kusserow, ‘De-homogenising American individualism: socialising hard and soft individualism in Manhattan and Queens’

C Kobayashi et al ‘Cultural and linguistic influence on neural bases of ‘theory of mind”

S Kitayama & A Uskul, ‘Culture, mind and the brain: current evidence and future directions’.

etc etc etc

Written by stewart henderson

April 23, 2019 at 10:53 am

What’s up with Trump’s frontal cortex? part 2


Before going on with my thoughts about little Donnie’s brain, I want to address two pieces of relevant reading I’ve done lately. 

First, the short article by ‘Neuroskeptic’ entitled ‘Don’t blame Trump’s brain‘. Now, as anyone who’s read much of my blog knows, I consider myself a skeptic and a supporter of the skeptical community. However, I don’t entirely agree with Neuroskeptic here. First, describing people’s attempts to work out Trump’s psychology or neurology from his words and actions as ‘Trumphrenology’ is a silly put-down. In fact, all psychiatric conditions are diagnosed on the basis of observed words and acts – duh, what else, unless there’s a brain injury or genetic abnormality? So the medical terms used to describe Trump and others do have some validity, though I agree that ‘medicalising’ the problem of Trump can be counter-productive, as it is with many ‘conditions’ which have appeared recently to describe the spectra of human behaviour. It’s more important, in my view, to recognise Trump as a career criminal than to put a psycho-neurological label on him. Then again, as someone who doesn’t believe in free will, the brain that makes Trump be Trump is of some interest to me. Second, Neuroskeptic describes the arguments of those who attribute medical conditions to people on the basis of behaviour as ‘circular’. This is false – behaviour tells us more than Neuroskeptic allows. When we try to understand the brain, we look at how it behaves under particular conditions. According to Neuroskeptic, ‘it’s rarely useful to try to understand a behaviour in neuroscientific terms’. If that’s true, then the monumental 700-page book Behave, by Robert Sapolsky, one of the world’s leading neurobiologists, was largely a waste of time. Third, Neuroskeptic questions the validity and ethics of Trump ‘diagnosis-at-a-distance’. This is absurd. Over the past two years alone, Americans have been subjected to several thousand tweets, hundreds of televised speeches and comments, and the day-to-day actions of the lad in the White House. Unless they make a real effort to switch off, most Americans can’t help knowing more about Trump than they do about just about anyone in their intimate circle. Where’s the distance?

Second, on The dangerous case of Donald Trump, by 27 people working in the field of mental health. I’ve not read it, but I’ve read the ‘summary’, attributed to Bandy X Lee, the contributing editor of the full book, though I prefer to believe that Lee, a respected Yale professor of psychology, had no hand in writing this summary, which is, syntactically speaking, the worst piece of published writing I’ve ever read in my life (I say this as a language teacher). I prefer to believe it was written by an intellectually disabled computer. I’m sure the full book is far far better, but still I’m amused by the variety of conditions Trump may be suffering from – ADHD, malignant narcissism, borderline personality disorder, psychopathology, sociopathology, delusional disorder, generalised anxiety disorder etc (OK that last one is what most reasoning Americans are supposedly suffering from because of Trump). All of this is a bit of a turn-off, so I won’t be reading the book. I tend to agree with what Neuroskeptic seems to be implying – that we don’t need a psychiatric diagnosis as an excuse to get rid of Trump – his obviously asinine remarks, his insouciant cruelty and his general incompetence are in full view. His criminality should have seen him in jail long ago, for a long time. Further, the idea that a diagnosis of mental instability could lead to invoking the 25th amendment is absurd on its face. Anyone who’s read the 25th amendment should see that. I don’t see any evidence that Trump’s condition is deteriorating – he’s been consistently deceitful and profoundly incurious throughout his life. That means he was elected as a fuckwitted dickhead. Don’t blame Trump, blame those who elected him. And blame the lack of checks and balances that should make it impossible for just anyone to become President. Democracy does have its flaws after all.

So what are the patterns of behaviour that might lead to a diagnosis, which then might be confirmed neurologically – if, for example we were to apply a tranquillising dart to this bull-in-a-china-shop’s voluminous rump, then tie him up and probe his frontal and pre-frontal regions and their connections, in response to questioning and other fun stimuli (I’d love to be in charge of that operation)?

I’ll first list some notable Trump behaviours and traits, recognised by the cognoscenti, without suggesting anything about their relation to frontal cortex dysfunction.

  • A tendency, or need, to take credit for everything positive that happens within his particular environment, and a concomitant tendency, or need, to blame anyone else for everything negative occurring in that environment
  • a winner/loser mentality, in which losers are often members of ‘losing’ cultures, sub-groups or entities (blacks, latinos, women, the failing NYT) and winners are judged in terms of pure power and wealth (Putin, Kim, Manafort, Fred Trump)
  • lack of focus in speeches and an inability to listen; generally a very limited attention span 
  • frequently cited temper tantrums
  • lack of empathy and consideration for others, to quite an extreme degree, close to solipsism
  • emphasis on compliance and deference from others, inability to deal with criticism
  • extreme lack of curiosity
  • lack of interest in or understanding of ethics
  • lack of interest in or understanding of concepts of truth/falsehood 
  • extreme need to be the centre of attention

I think that’s a good start. As to how these traits map on to psychopathological states and then onto cortical development, I won’t be so psychopathological as to provide clear answers. Most people I’ve spoken to suggest malignant narcissism as a pretty good fit for his behaviour – perhaps due to its all-encompassing vagueness? Wikipedia describes it as ‘a hypothetical, experimental diagnostic category’, which doesn’t sound promising, and it isn’t recognised in the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR), though narcissistic personality disorder (NPD) is. I suppose that some people want to particularly emphasise Trump’s malignancy, but I think NPD is bad enough. Here’s the Wikipedia description, drawn from the latest DSM and other sources:

a personality disorder with a long-term pattern of abnormal behavior characterized by exaggerated feelings of self-importance, excessive need for admiration, and a lack of empathy. Those affected often spend a lot of time thinking about achieving power or success, or on their appearance. They often take advantage of the people around them. The behaviour typically begins by early adulthood, and occurs across a variety of social situations.

Now, I came up with the Trump behavioural traits before I read this description, I swear. I think the fit is pretty exact, but it’s clear that those responsible for diagnosing someone with NPD don’t do so on the basis of brain scans. I’ve explored enough neurology to fairly safely say that NPD, psychopathy and many other psychiatric conditions just can’t, as yet, be reliably correlated with neurological connections or lack thereof. Even schizophrenia, one of the more treatable psychotic conditions, is rarely described in terms of brain function, and is diagnosed entirely through behaviour patterns. 

Having said this, all of these conditions are entirely about brain function, and in Trump’s case, brain development since early childhood. We’ll never get to know what precisely is up with Trump’s frontal cortex, partly because we’ll never get that tranquilising dart to penetrate his fat arse and to then practise Nazi-like experimentation… sorry to dwell so lovingly on this. And partly because, in spite of the galloping advances we’re making in neurology, we’re not at the knowledge level, I suspect, of being able to pinpoint connections between the amygdalae, the hypothalamus, the hippocampus and the various regions of the frontal and prefrontal cortex. I plan to do more research and reading on this, and there may be another blog piece in the offing. However, one thing I can say – Trump probably isn’t a psychopath. Psychopaths tend not to have temper tantrums – their emotional responses are minimal, rather than being exacerbated by life’s slings and arrows, and their violence is instrumental rather than impassioned. Their amygdalae – the founts of aggression and anxiety – are correspondingly reduced. Doesn’t sound like Trump.

Again, though reflection on Trump’s curious psyche may be intrinsically interesting, it’s his crimes that should do him in. As I’ve said before, the fact that he’s not currently in custody is a disgrace to the American criminal and legal system. His fixer is facing a jail term, and in pleading guilty to two felony counts of campaign finance violations, has fingered Trump as the Mr Big of that operation. Those authorities who have not arrested him should themselves be facing legal action for such criminal negligence. And of course other crimes will be highlighted by the Mueller team in the near future, though such scams as Trump University should have seen him jailed long ago. Others have suffered lengthy prison terms for less. But that’s the USA, the greatest democracy in the greatest, free-est and fairest nation in the history of the multiverse. Maybe such overweening pride deserves this fall…

Written by stewart henderson

October 12, 2018 at 4:20 pm