
interactional reasoning: modularity



Mercier and Sperber write a lot about modules and modularity in their book on interactional reasoning and its evolution. I’ve passed over this material so far, as I find the concepts difficult, and I’m not sure whether understanding reasoning as a module – if it even fits that description – is essential to the thesis about interactional reasoning and its superiority to the intellectualist model. However, as an autodidact who hates admitting intellectual defeat, I want to use this blog to fully understand stuff for its own sake – and I generally find the reward is worth the pain.

Modules and modularity are introduced in chapter 4 of The enigma of reason. The idea is that there’s a kind of inferential mechanism that we share with other species – something noted, more or less, by David Hume centuries ago. A sort of learning instinct, as argued by bird expert Peter Marler, but taken further in our species, as suggested by Steven Pinker in The language instinct, and by other cognitive psychologists.

This requires us to think more carefully about the term ‘instinct’. Marler saw it as ‘an evolved disposition to acquire a given type of knowledge’, such as songs for birds and language for humans. We’ve found that we have evolved predispositions to recognise faces, for example, and that there’s a small area in the inferior temporal lobes called the fusiform face area that plays a vital role in face recognition. 

However, reasoning is surely more conceptual than perceptual. Interestingly, though, in learning how to do things ‘the right way’ – that is, normative behaviour – children often rely on perceptual cues from adults. When shown the ‘right way’ to do something by a person they trust, in a teacherly sort of way (this is called ostensive demonstration), an infant will tend to do it that way all the time, even though there may be many other perfectly acceptable ways to perform that act. They then try to get others to conform to this ostensively demonstrated mode of action. This suggests, perhaps, an evolved disposition for norm identification and acquisition.

Face recognition, norm acquisition and other even more complex activities, such as reading, are gradually being hooked up to specific areas of the brain by researchers. They’re described as being on an instinct-expertise continuum, and according to Mercier and Sperber:

[they] are what in biology might typically be called modules: they are autonomous mechanisms with a history, a function, and procedures appropriate to this function. They should be viewed as components of larger systems to which they each make a distinct contribution. Conversely, the capacities of a modular system cannot be well explained without identifying its modular components and the way they work together.

A close reading of this passage should suggest to us that reasoning is one of those larger systems informed by many other mechanisms. The mind, according to the authors, is an articulated system of modules. The neuron is a module, as is the brain. The authors suggest that this is, at the very least, the most useful working hypothesis. Cognitive modules, in particular, need not be innate, but can harness biologically evolved modules for other purposes.

I’m not sure how much that clarifies, though it has helped me, for what it’s worth. And that’s all I’ll be posting on interactional reasoning, for now.

Written by stewart henderson

February 6, 2020 at 5:29 pm

interactional reasoning: some stray thoughts



As I mentioned in my first post on this topic, bumble-bees have a fast-and-frugal way of obtaining the necessary from flowers while avoiding predators, such as spiders. It’s essentially about ‘assessing’ the relative cost of a false negative (sensing there’s no spider when there is one) and a false positive (sensing there’s a spider when there isn’t). Clearly, the cost of a false negative is likely death, but a false positive also has a cost, in wasted time and energy in the search for safe flowers. It’s better to be safe than sorry, but only up to a point – the bees still have a job to do, which is their raison d’être. So they’ve evolved to be wary of certain rough-and-ready signs of a spider’s presence. It’s not a fool-proof system, but it’s weighted towards false positives rather than false negatives, just enough to ensure overall survival, at least against this one particular threat.
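For what it’s worth, here’s a minimal sketch in Python of that cost asymmetry – the numbers are invented by me, not taken from any bee research – showing how a lopsided cost structure tips the ‘decision’ towards false positives:

DEATH_COST = 1000.0   # assumed cost of a false negative: landing where a spider lurks
SKIP_COST = 1.0       # assumed cost of a false positive: passing up a safe flower

def expected_cost(p_spider):
    """Expected cost of each action, given the current 'suspicion' that a spider is present."""
    return {
        "land": p_spider * DEATH_COST,        # risk death if the spider is real
        "skip": (1 - p_spider) * SKIP_COST,   # waste time if the flower was safe
    }

for p in (0.0001, 0.001, 0.01, 0.1):
    costs = expected_cost(p)
    best = min(costs, key=costs.get)
    print(f"P(spider) = {p}: land = {costs['land']:.3f}, skip = {costs['skip']:.3f} -> {best}")

With costs this lopsided, a mere one-in-a-thousand suspicion is already enough to justify moving on – which is roughly the bee’s position.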

When I’m walking on the street and note that a smoker is approaching, I have an immediate impulse, more or less conscious, to give her a wide berth, and even cross the road if possible. I suffer from bronchiectasis, an airways condition, which is much exacerbated by smoke, dust and other particulates. So it’s an eminently reasonable decision, or impulse (or something between the two). I must admit, though, that this event is generally accompanied by feelings of annoyance and disgust, and thoughts such as ‘smokers are such losers’ – in spite of the fact that, in the long long ago, I was a smoker myself.

Such negative thoughts, though, are self-preservative in much the same way as my avoidance measures. However, they’re not particularly ‘rational’ from the perspective of the intellectualist view of reason. I would do better, of course, in an interactive setting, because I’ve learned – through interactions of a sort (such as my recent reading of Siddhartha Mukherjee’s brilliant cancer book, which in turn sent me to the website of the US Surgeon-General’s report on smoking, and through other readings on the nature of addiction) – to have a much more nuanced and informed view. Still, my ‘smokers are losers’ disgust and disdain is perfectly adequate for my own everyday purposes!

The point is, of course, that reason evolved first and foremost to promote our survival, but further evolved, in our highly social species, to enable us to impress and influence others. And others have developed their own sophisticated reasons to impress and influence us. It follows that the best and most fruitful reasoning comes via interactions – collaborative or argumentative, in the best sense – with our peers. Of course, as I’ve stated it here, this is a hypothesis, and it’s quite hard to prove definitively. We’re all familiar with the apparently solitary geniuses – the Newtons, Darwins and Einsteins – who’ve transformed our understanding, and those who’ve been exposed to formal logic will be impressed by the rigour of Aristotelian and post-Aristotelian systems, with their concepts of validity and soundness as the sine qua non of good reasoning (not to mention those fearfully absolute terms, rational and irrational). Yet these supposedly solitary geniuses often admitted themselves that they ‘stood on the shoulders of giants’: Einstein often mentioned his indebtedness to other thinkers, and Darwin’s correspondence was voluminous. Science is more than ever today a collaborative or competitively interactive process. Think also of the mathematician Paul Erdős, whose obsessive interest in this most rational of activities led to a record number of collaborations.

These are mostly my own off-the-cuff thoughts. I’ll return to Mercier and Sperber’s writings on the evolution of reasoning and its modular nature next time.

Written by stewart henderson

February 1, 2020 at 11:11 am

interactional reasoning: cognitive or myside bias?


In the previous post on this topic, I wrote of surprise as a motivator for questioning what we think we know about our world, a shaking of complacency. In fact we need to pay attention to the unexpected, because it has greater potential for harm (or benefit) than the expected. It follows that expecting the unexpected, or at least being on guard for it, is a reasonable approach. Something which disconfirms our expectations can teach us a lot – it might be the ugly fact that undermines a beautiful theory. So it’s in our interest to watch out for, and even seek out, information that undermines our current knowledge – though it might be pointed out that it’s rarely the person who puts forward a theory who discovers the inconvenient data that undermines it. The philosopher Karl Popper promoted ‘falsificationism’ as a way of testing and tightening our knowledge, and it’s interesting that the very title of his influential work Conjectures and refutations speaks to an interactive approach towards reasoning and evaluating ideas.

In The enigma of reason, Mercier and Sperber argue that confirmation bias can best be explained by the fact that, while most of our initial thinking about a topic is of the heuristic, fast-and-frugal kind, we then spend a great deal more time, when asked about our reasoning re a particular decision, developing post-hoc justifications. Psychological research has borne this out. The authors suggest that this is more a defence of the self, and of our reputation. They suggest that it’s more of a myside bias than a confirmation bias. Here’s an interesting example of the effect:

Deanna Kuhn, a pioneering scholar of argumentation and cognition, asked participants to take a stand on various social issues – unemployment, school failure and recidivism. Once the participants had given their opinion, they were asked to justify it. Nearly all participants obliged, readily producing reasons to support their point of view. But when they were asked to produce counterarguments to their own view, only 14 percent were consistently able to do so, most drawing a blank instead.

Mercier & Sperber, The enigma of reason, pp213-4

The authors give a number of other examples of research confirming this tendency, including one in which the participants were divided into two groups, one with high political knowledge and another with limited knowledge. The low-knowledge group were able to provide twice as many arguments for their view of an issue as arguments against, but the high-knowledge group performed even more poorly, being unable to provide any arguments against their own view. ‘Greater political knowledge only amplified their confirmation bias’. Again, the reason for this appears to be reputational. The more justifications you can find for your views and decisions, the more your reputation is enhanced, at least in your own mind. There seems no obvious benefit in finding arguments against yourself.

All of this seems very negative, and even disturbing. And it’s a problem that’s been known about for centuries. The authors quote a great passage from Francis Bacon’s Novum Organum:

The human understanding when it has once adopted an opinion… draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects and despises, or else by some distinction sets aside and rejects, in order that by this great and pernicious predetermination the authority of its former conclusions may remain inviolate.

Yet it isn’t all bad, as we shall see in future posts…

Reference

Hugo Mercier & Dan Sperber, The enigma of reason: a new theory of human understanding, 2017

Written by stewart henderson

January 29, 2020 at 1:44 pm

interactional reasoning and confirmation bias – introductory


I first learned about confirmation bias, and motivated reasoning, through my involvement with skeptical movements and through the Skeptics’ Guide to the Universe (SGU) podcast. As has been pointed out by the SGU and elsewhere, confirmation bias – the strong tendency to acknowledge and support views, on any topic, that confirm our own, and to dismiss or avoid listening to views from the opposite side – is a feature of liberal and conservative thought in equal measure, and as much a feature of the thinking of highly credentialed public intellectuals as of your average unlearned sot. The problem of confirmation bias, this ‘problem in our heads’, has been blamed for the current social media maladies we supposedly suffer from, creating increasingly partisan echo-chambers in which we allow ourselves, or are ‘driven by clicks’, to be shut off from opposing views and arguments.

But is confirmation bias quite the bogey it’s generally claimed to be? Is it possibly an evolved feature of our reasoning? This raises fundamental questions about the very nature of what we call reason, and how and why it evolved in the first place. Obviously I’m not going to be able to deal with this Big Issue in the space of the short blog pieces I’ve been writing recently, so it’ll be covered by a number of posts. And, just as obviously, my questioning of confirmation bias hasn’t sprung from my own somewhat limited genius – it pains me to admit – but from some current reading material.

The enigma of reason: a new theory of human understanding, by research psychologists Hugo Mercier and Dan Sperber, is a roolly important and timely piece of work, IMHO. So important that I launch into any attempt to summarise it with much trepidation. Anyway, their argument is that reasoning is largely an interactive tool, and evolved as such. They contrast the interactive view of reason with the ‘intellectualist’ view, which begins with Aristotle and his monumentally influential work on logic and logical fallacies. So with that in mind, they tackle the issue of confirmation bias in chapter 11 of their book, entitled ‘Why is reason biased?’

The authors begin the chapter with a cautionary tale, of sorts. Linus Pauling, winner of two Nobel Prizes and regarded by his peers as perhaps the most brilliant chemist of the 20th century, became notoriously obsessed with the healing powers of vitamin C, in spite of mounting evidence to the contrary, raising the question of how such a brilliant mind could get it so wrong. And perhaps a more important question – if such a mind could be capable of such bias, what hope is there for the rest of us?

So the authors look more closely at why bias occurs. Often it’s a matter of ‘cutting costs’ – that is, the processing costs of cognition. An example is the use of the ‘availability heuristic’, which Daniel Kahneman writes about in Thinking fast and slow, where he also describes it as WYSIATI (what you see is all there is). If, because you work in a hospital, you see many victims of road accidents, you’re liable to over-estimate the number of road accidents that occur in general. Or, because most of your friends hold x political views, you’ll be biased towards thinking that more people hold x political views than is actually the case. It’s a fast and lazy form of inferential thinking, though not an entirely unreliable one. Heuristics in general are described as ‘fast and frugal’ ways of thinking, which save a lot in cognitive cost while losing a little in reliability. In fact, as research has (apparently) shown, heuristics can sometimes be more reliable than painstaking, time-consuming analysis of a problem.
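As a toy illustration of that second example – my own sketch, not Kahneman’s, with all figures assumed – here’s what happens in Python if you estimate the prevalence of a view by ‘sampling’ only your own like-minded circle:

import random

random.seed(42)
POPULATION_RATE = 0.3   # assumed: 30% of the general population holds view x
FRIEND_RATE = 0.8       # assumed: 80% of your like-minded friends hold view x

population = [random.random() < POPULATION_RATE for _ in range(100_000)]
friends = [random.random() < FRIEND_RATE for _ in range(150)]

print(f"true rate: {sum(population) / len(population):.2f}")            # ~0.30
print(f"estimate from your circle: {sum(friends) / len(friends):.2f}")  # ~0.80

The sample is fast and frugal all right, but it’s drawn from the wrong population – what you see is all there is.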

One piece of research illustrative of fast-and-frugal cognitive mechanisms involves bumble-bees and their strategies to avoid predators (I won’t give the details here). Why bees? Because reasoning as an evolved mechanism is surely directed first and foremost at individual survival – at being preservative rather than right. It follows that some such mechanism, whether we call it reasoning or not, exists in more or less complex form in more or less complex organisms. It also follows from this reasoning-for-survival outlook that we pay far more attention to something surprising that crops up in our environment than to routine stuff. As the authors point out:

Even one-year-old babies expect others to share their surprise. When they see something surprising, they point toward it to share their surprise with nearby adults. And they keep pointing until they obtain the proper reaction or are discouraged by the adults’ lack of reactivity.

Mercier & Sperber, The enigma of reason, p210

Needless to say, the adults’ reactions in such an everyday situation are crucial for the child – she learns that what surprised her is perhaps not so surprising, or is pleasantly surprising, or is dangerous, etc. All of this helps us in fast-and-frugal thinking from the very start.

Surprises – events and information that violate our expectations – are always worth paying attention to, in everyday life, for our survival, but also in our pursuit of accurate knowledge of the world, aka science. More about that, and confirmation bias, in the next post.

Reference

Hugo Mercier & Dan Sperber, The enigma of reason: a new theory of human understanding, 2017

Written by stewart henderson

January 28, 2020 at 2:13 pm

preliminary thoughts on reasoning and reputation


In my youth I learned about syllogisms and modus ponens and modus tollens and the invalidity of arguments ad hominem and reductio ad absurdum, and valid but unsound arguments and deduction and induction and all the rest, and even wrote pages filled with ps and qs to get myself clear about it all, and then forgot about it. All that stuff was only rarely applied to everyday life, where, it seemed, our reasoning, though important, was more implicit and intuitive. What I did notice though – being a bit of a loner – was that when I did have a disagreement with someone which left a bitter taste in my mouth, I would afterwards go over the argument in my head to make it stronger, more comprehensive, more convincing and bullet-proof (and of course I would rarely get the chance to present this new and improved version). But interestingly, as part of this process, I would generally make my opponent’s argument stronger as well, even to the point of conceding some ground to her and coming to a reconciliation, out of which both of us would be reputationally enhanced.

In fact, I have to say I spend quite a bit of time having these imaginary to-and-fros, not only with ‘real people’, but often with TV pundits or politicians who’ll never know of my existence. To take another example, when many years ago I was accused of a heinous crime by a young lad to whom I was a foster-carer, I spent excessive amounts of time arguing my defence against imaginary prosecutors of fiendish trickiness, but the case was actually thrown out without my ever having the chance, or being allowed, to say a word in a court-house, other than ‘not guilty’.

So, is all this just so much wasted energy? Well, of course not. For example, I’ve used all that reflection on the court case to give, from my perspective, a comprehensive account of what happened and why, of my view of the foster-care system and its deficiencies, of the failings of the police in the matter and so forth, to friends and interested parties, as well as in writing on my blog. And it’s the same with all the other conversations with myself – they’ve sharpened my view of the matter in hand, of people’s motivations for holding different views (or my view of their motivations), they’ve caused me to engage in research which has tightened or modified my position, and sometimes to change it altogether.

All of this is preliminary to my response to reading The enigma of reason, by Hugo Mercier and Dan Sperber, which I’m around halfway through. One of the factors they emphasise is this reputational aspect of reason. My work to justify myself in the face of a false allegation was all about restoring or shoring up my reputation, which involved not just explaining why I could not have done what I was accused of doing, but explaining why person x would accuse me of doing it, knowing I would have to contend with ‘where there’s smoke there’s fire’ views that could be put, even if nobody actually put them.

So because we’re concerned, as highly socialised creatures, with our reputations, we engage in a lot of post-hoc reasoning, which is not quite to say post-hoc rationalisation, which we tend to think of as making excuses after the fact (something we do a lot of as well). A major point that Mercier and Sperber are keen to emphasise is that we largely negotiate our way through life via pretty reliable unconscious inferences and intuitions, built up over years of experience, which we only give thought to when they’re challenged or when they fail us in some way. But of course there’s much more to their ‘new theory of human understanding’ than this. In any case much of what the book has to say makes very good sense to me, and I’ll explore this further in future posts.

Written by stewart henderson

January 20, 2020 at 2:05 pm

What is inference?

leave a comment »


What are you inferring?

So am I to infer from this you’re not interested?

What does inferring actually mean? What is it to ‘infer’? Does it require language? Can the birds and the bees do it? We traditionally associate inference with philosophy, which talks of deductive inference. For example, here’s a quote from Blackwell’s dictionary of cognitive science:

Inferences are made when a person (or machine) goes beyond available evidence to form a conclusion. With a deductive inference, this conclusion always follows the stated premises. In other words, if the premises are true, then the conclusion is valid. Studies of human efficiency in deductive inference involves conditional reasoning problems which follow the “if A, then B” format.

So according to this definition, only people, and machines constructed by people, can do it, deductively or otherwise. However, psychologists have pretty thoroughly demolished this view in recent years. In ‘Understanding Inference’, section 2 of their book The enigma of reason, cognitive psychologists Hugo Mercier and Dan Sperber explore our developing view of the concept.

Inference is largely based on experience. Think of Pavlov and his dogs. In his famous experiment he created an inferential association in the dogs’ minds between a bell and dinner. Hearing the bell thus set off salivation in expectation of food. The bell didn’t cause the salivation (or at least it wasn’t the ultimate cause); the connection was in the mind of the dog. The hearing of the bell set off a basic thought process which brought on the salivation. The dog inferred from experience, manipulated by the experimenter, that food was coming.

Mercier and Sperber approvingly quote David Hume’s common sense ideas about inference and its widespread application. Inference, he recognised, was a much more basic and universal tool than reason, and it was a necessary part of the toolkit of any sentient being. ‘Animals’, he wrote, ‘are not guided in these inferences by reasoning: Neither are children: Neither are the generality of mankind, in their ordinary actions and conclusions. Neither are philosophers themselves, who, in all the active parts of life, are, in the main, the same with the vulgar…. Nature must have provided some other principle, of more ready, and more general use and application; nor can an operation of such immense consequence in life, as that of inferring effects from causes, be trusted to the uncertain process of reasoning and argumentation’.

This is a lovely example of Humean skepticism, which flies in the face of arid logicalism, and recognises that the largely unconscious process of inference, which we would now recognise as a product of evolution, a basic survival mechanism, is more reliable in everyday life than the most brilliantly constructed logical systems.

The point is that we make inferences more or less constantly, and mostly unconsciously. The split-second decisions made in sport, for example, are all made, if not unconsciously, then with an automaticity not attributable to reason. And most of our life is lived with a similar lack of deep reflection, from inference to inference, like every other animal. Inference, then, to quote Mercier and Sperber’s gloss on Hume, is simply ‘the extraction of new information from information already available, whatever the process’. It’s what helps us slip the defender and score a goal in soccer, or prompts us to check the batteries when the remote stops working, or moves us to look forward to break-time when we smell coffee. It’s also what wags your dog’s tail when she hears familiar footsteps approaching the house.

There’s a lot more to be said, of course…

Written by stewart henderson

December 3, 2019 at 9:53 pm

Bayesian probability, sans maths (mostly)



Okay, time to get back to sciency stuff, to try to get my head around things I should know more about. Bayesian statistics and probability have been brought to the periphery of my attention many times over the years, but my current slow reading of Daniel Kahneman’s Thinking fast and slow has challenged me to master the subject once and for all (and then doubtless to forget about it forevermore).

I’ve started a couple of pieces on this topic in the past week or so, and abandoned them along with all hope of making sense of what is no doubt a doddle for the cognoscenti, so I clearly need to keep it simple for my own sake. The reason I’m interested is because critics and analysts of both scientific research and political policy-making often complain that Bayesian reasoning is insufficiently utilised, to the detriment of such activities. I can’t pretend that I’ll be able to help out though!

So Thomas Bayes was an 18th century English statistician who left a theorem behind in his unpublished papers, apparently underestimating its significance. The person most responsible for utilising and popularising Bayes’ work was the French polymath Pierre-Simon Laplace. The theorem, or rule, is captured mathematically thusly:

P(A|B) = P(B|A) × P(A) / P(B)

where A and B are events, and P(B), that is, the probability of event B, is not equal to zero. In statistics, the probability of an event’s occurrence ranges from 0 to 1 – meaning zero probability to total certainty.
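For fellow non-mathematicians, the rule is short enough to express as a few lines of Python – a minimal sketch of the formula itself, with an invented weather example tacked on (the numbers are mine, purely for illustration):

def bayes(p_b_given_a, p_a, p_b):
    """P(A|B) = P(B|A) * P(A) / P(B), where P(B) is non-zero."""
    if p_b == 0:
        raise ValueError("P(B) must be non-zero")
    return p_b_given_a * p_a / p_b

# Invented example: A = 'rain today', B = 'overcast morning'.
# Assume P(rain) = 0.2, P(overcast | rain) = 0.9, P(overcast) = 0.4.
print(bayes(p_b_given_a=0.9, p_a=0.2, p_b=0.4))  # 0.45 = P(rain | overcast)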

I do, at least, understand the above equation, which, wordwise, means that the probability of A occurring, given that B has occurred, is equal to the probability of B occurring, given that A has occurred, multiplied by the probability of A’s occurrence, all divided by the probability of B’s occurrence. However, after tackling a few video mini-lectures on the topic I’ve decided to give up and focus on Kahneman’s largely non-mathematical treatment with regard to decision-making. The theorem, or rule, presents, as Kahneman puts it, ‘the logic of how people should change their mind in the light of evidence’. Here’s how Kahneman first describes it:

Bayes’ rule specifies how prior beliefs… should be combined with the diagnosticity of the evidence, the degree to which it favours the hypothesis over the alternative.

D Kahneman, Thinking fast and slow, p154

To take the simplest of examples – if you believe that there’s a 65% chance of rain tomorrow, you really need to believe that there’s a 35% chance of no rain tomorrow, rather than any alternative figure. That seems logical enough, but take this example re US Presidential elections:

… if you believe there’s a 30% chance that candidate x will be elected President, and an 80% chance that he’ll be re-elected if he wins first time, then you must believe that the chances that he will be elected twice in a row are 24%.

This is also logical, but not obvious to a surprisingly large percentage of people. What appears to ‘throw’ people is a story, a causal narrative. They imagine a candidate winning, somewhat against the odds, then proving her worth in office and winning easily next time round – this story deceives them into defying logic and imagining that the chance of her winning twice in a row is greater than that of winning first time around – which is a logical impossibility. Kahneman places this kind of irrationalism within the frame of system 1 v system 2 thinking – roughly equivalent to intuition v concentrated reasoning. His solution to the problem of this kind of suasion-by-story is to step back and take greater stock of the ‘diagnosticity’ of what you already know, or what you have predicted, and how it affects any further related predictions. We’re apparently very bad at this.

There are many examples throughout the book of failure to reason effectively from information about base rates, often described as ‘base-rate neglect’. A base rate is a statistical fact which should be taken into account when considering a further probability. For example, when given information about the character of a fictional person T – information deliberately designed to suggest he was a stereotypical librarian – research participants judged the person much more likely to be a librarian than a farmer, even though they knew, or should have known, that the number of persons employed as farmers was larger by a big factor than the number employed as librarians (the base rate of librarians in the workforce). Of course, the degree to which the base rate was made salient to participants affected their predictions.
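To put rough numbers on the librarian/farmer case (the figures here are my assumptions, not Kahneman’s): suppose farmers outnumber librarians 20 to 1, and the description fits a librarian four times better than it fits a farmer. Bayes’ rule in odds form says the description still shouldn’t convince us:

PRIOR_ODDS = 1 / 20       # assumed: 1 librarian for every 20 farmers
LIKELIHOOD_RATIO = 4.0    # assumed: description is 4x as likely for a librarian

posterior_odds = PRIOR_ODDS * LIKELIHOOD_RATIO      # = 0.2
p_librarian = posterior_odds / (1 + posterior_odds)
print(f"P(librarian | description) = {p_librarian:.2f}")   # ~0.17

Even a strongly ‘librarianish’ description leaves the odds favouring the farmer.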

Here’s a delicious example of the application, or failure to apply, Bayes’ rule:

A cab was involved in a hit-and-run at night. Two cab companies, Green Cabs and Blue Cabs, operate in the city. You’re given the following data:

– 85% of the cabs in the city are Green, 15% are Blue.

– A witness identified the cab as Blue. The court tested the reliability of the witness under the circumstances that existed on the night of the accident and concluded that the witness correctly identified each one of the two colours 80% of the time and failed 20% of the time.

What is the probability that the car involved in the accident was Blue rather than Green?

D Kahneman, Thinking fast and slow, p166

It’s an artificial scenario, granted, but if we accept the accuracy of those probabilities, we can reason as follows: the prior odds that the cab was Blue are .15/.85, and the witness’s testimony multiplies those odds by the likelihood ratio .8/.2, giving posterior odds of (.15/.85) × (.8/.2) = .706. Converting odds back into a probability – the odds divided by one plus the odds, that is, .706/1.706 – gives approximately 41%.
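The same answer drops out of a few lines of Python, whether we apply Bayes’ theorem directly or use the odds shortcut above:

P_BLUE, P_GREEN = 0.15, 0.85     # base rates of the two companies
P_ID_BLUE_IF_BLUE = 0.80         # witness reliability
P_ID_BLUE_IF_GREEN = 0.20        # witness error rate

# total probability that the witness says 'Blue' (law of total probability)
p_id_blue = P_ID_BLUE_IF_BLUE * P_BLUE + P_ID_BLUE_IF_GREEN * P_GREEN

# direct application of Bayes' theorem
print(f"direct: {P_ID_BLUE_IF_BLUE * P_BLUE / p_id_blue:.3f}")   # ~0.414

# odds form: prior odds x likelihood ratio, then odds -> probability
odds = (P_BLUE / P_GREEN) * (P_ID_BLUE_IF_BLUE / P_ID_BLUE_IF_GREEN)
print(f"odds form: {odds / (1 + odds):.3f}")                     # ~0.414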

So how close were the research participants to this figure? Most participants ignored the statistical data – the base rates – and gave the figure of 80%. They were more convinced by the witness. However, when the problem was framed differently, by providing causal rather than statistical data, participants’ guesses were more accurate. Here’s the alternative presentation of the scenario:

You’re given the following data:

– the two companies operate the same number of cabs, but Green cabs are involved in 85% of accidents

– the information about the witness is the same as previously presented

The mathematical result is the same, but this time the guesses were much closer to the correct figure. The difference lay in the framing. Green cabs cause accidents. That was the fact that jumped out, whereas in the first scenario, the fact that most clearly jumped out was that the witness identified the offending car as Blue. The statistical data in scenario 1 was largely ignored. In the second scenario, the witness’s identification of the Blue car moderated the tendency to blame the Green cars, whereas in scenario 1 there was no ‘story’ about Green cars causing accidents and the blame shifted almost entirely to the Blue cars, based on the witness’s story. Kahneman named his chapter about this tendency ‘Causes trump statistics’.

So there are causal and statistical base rates, and the lesson is that in much of our intuitive understanding of probability, we simply pay far more attention to causal base rates, largely to our detriment. Also, our causal inferences tend to be stereotyped, so that only if we are faced with surprising causal rates, in particular cases and not presented statistically, are we liable to adjust our probabilistic assessments. Kahneman presents some striking illustrations of this in the research literature. Causal information creates bias in other areas of behaviour assessment too, of course, as in the phenomenon of regression to the mean, but that’s for another day, perhaps.

Written by stewart henderson

August 27, 2019 at 2:52 pm

discussing mental health and illness


Canto: I’ve been told I’m on the autism spectrum, by someone who’s not on it, presumably, but who’s also not an expert on such things, but I’m not sure who is.

Jacinta: Well, of course we’re all on the autism spectrum – it depends on your location on it, I suppose, whether you need to worry. ‘You’re sick’ is one of the oldest lines of abuse, but I’m reminded of a passage in The moral landscape, which I’m currently rereading, in which Sam Harris describes a funny-but-not-so-funny piece of research by one D L Rosenhan:

… in which he and seven confederates had themselves committed to psychiatric hospitals in five different states in an effort to determine whether mental health professionals could detect the presence of the sane among the mentally ill. In order to get committed, each researcher complained of hearing a voice repeating the words ’empty’, ‘hollow’ and ‘thud’. Beyond that, each behaved perfectly normally. Upon winning admission to the psychiatric ward, the pseudo-patients stopped complaining of their symptoms and immediately sought to convince the doctors, nurses and staff that they felt fine and were fit to be released. This proved surprisingly difficult. While these genuinely sane patients wanted to leave the hospital, repeatedly declared that they experienced no symptoms, and became ‘paragons of cooperation’, their average length of hospitalisation was 19 days (ranging from 7 to 52 days), during which they were bombarded with an astounding range of powerful drugs (which they discreetly deposited in the toilet). None were pronounced healthy. Each was ultimately discharged with a diagnosis of schizophrenia ‘in remission’ (with the exception of one who received a diagnosis of bipolar disorder). Interestingly, while the doctors, nurses and staff were apparently blind to the presence of normal people on the ward, actual mental patients frequently remarked on the obvious sanity of the researchers, saying things like ‘You’re not crazy – you’re a journalist’.

S. Harris, The moral landscape, p142

Canto: Well, that’s a fascinating story, but let’s get skeptical. Has that study been replicated? We know how rarely that happens. And there are quite a few other questions worth asking. Wouldn’t most of the staff etc have been primed to assume these patients had a genuine mental illness? And surely only a small percentage would have had the authority to make a decision either way. Who exactly had them committed, what was the process, and what was the relationship between those doing the diagnosis and those engaging in treatment and daily care? Was there any fudging on the part of the pseudo-patients (who were apparently also the researchers) in order to prove their point (which presumably was that mental illness can be easily shammed)? And wouldn’t you expect other patients, many of whom wouldn’t believe in their own mental problems, to be supportive of the sanity of those around them?

Jacinta: Okay, those are some valid points, but are you prepared to accept that a lot of these mental conditions, such as bipolar disorder, borderline personality disorder (the name speaks volumes), attention deficit disorder, narcissistic whatever disorder and so on, are a little flakey around the edges?

Canto: Maybe, but with solid centres I’m sure. Depression is probably the most common of those mental conditions, and too much skepticism on that count could obviously lead to disaster. Take the case of South Korea, which has one of the highest suicide rates in the world. There appears to be a nationwide skepticism about mental health issues there, which clashes with high stress levels to create a crisis of care. Professional help is rarely sought and isn’t widely available. It raises the question of the value of skepticism in some areas.

Jacinta: I wonder if the rapid advances in neurophysiology can help us here. Mental health is all about the brain. In the above quote, the pseudo-patients were mostly diagnosed with schizophrenia. That’s surprising. In my naïveté I would’ve thought there was a neurological test for schizophrenia by now.

Canto: Well, the experiment described in The moral landscape dates from the early seventies, but currently there’s still no diagnostic test for schizophrenia based on the brain itself, it’s all about such symptoms as specific delusions and hallucinations, which could still be shammed I suppose, if anyone wanted to. But what about borderline personality disorder – I was told recently that it’s very real, in spite of the name.

Jacinta: Well, there appears to be a mystery about the causes, and a general confusion about the symptoms, which seem to be rather wide-ranging – though I suppose if a patient displays several of them you can safely conclude that she’s stark staring bonkers.

Canto: Yes that’s a thing about mental illness, quite seriously. You don’t need to be an expert to notice when people are behaving in a way that’s detrimental to themselves and others, especially if it’s a sharp deviation from previous behaviour. And if it’s a slow descent, as quite often depression can be, it’s harder to pick from that person’s standard lugubrious personality, so to speak. And in the end, maybe the labelling isn’t so important as the help and the treatment. But then, people love a label – they want to know precisely what’s wrong with them.

Jacinta: I suppose the difficulty with mental illness and labelling, as opposed to labelling other more ‘physical’ illnesses or injuries, is the near-ineffable complexity of the brain. For example, I notice that among the symptoms of borderline personality disorder are apparent behaviours that don’t really cohere in any way. This site places the symptom of uncertainty and indecisiveness along with extreme risk-taking and impulsiveness, and then there is fear of abandonment, and other odd behaviours which seem to head in different directions, seeming to have one thing alone in common – being extreme or abnormal.

Canto: Yes, again, behaviour that tends to harm the self or others.

Jacinta: At the moment, I think there are still too few connections between neurology and psychiatry and the treatment of mental illness, though it’s a matter of enormous complexity. I had thought, for example, that the role of the neurotransmitter dopamine was essential to our understanding of schizophrenia, but more recent research has found that the neurochemistry of the condition involves many other factors, including glutamate, GABA, acetylcholine and serotonin. There’s so much more work to be done. But we also need to be very aware of the social and cultural conditions that tip people over the edge into mental illness. Changes in the way our brain is functioning might be seen as proximal causes of an increase in depression and suicide, but it’s more likely that the ultimate causes have to do with the stresses that particular organisations, societies and cultures impose upon us.

Written by stewart henderson

June 30, 2019 at 12:45 pm

Lessons from the Trump travesty?


Consider this passage from The moral landscape, by Sam Harris:

As we better understand the brain, we will increasingly understand all of the forces – kindness, reciprocity, trust, openness to argument, respect for evidence, intuitions of fairness, impulse control, the mitigation of aggression, etc – that allow friends and strangers to collaborate successfully on the common projects of civilisation…

These are indeed, and surely, the forces, or traits, we should want in order to have the best social lives. And they involve a richly interactive relationship between the social milieu – the village, the tribe, the family, the state – and the individual brain, or person. They are also, IMHO, the sorts of traits we would hope to find in our best people – for example, our political leaders, regardless of which political faction they represent.

Now consider those traits in respect of one Donald Trump. It should be obvious to any reasoning observer that he is deficient in all of them. And I mean deficient to a jaw-dropping, head-scratching degree. So there are two questions worth posing here.

  1. How could a person, so obviously deficient in all of the traits we would consider vital to the project of civilisation, have been created in a country that prides itself on being a leader of the free, democratic, civilised world?
  2. How could such a person rise to become the President of that country – which, whether or not you agree with its self-description of its own moral worth, is undoubtedly the world’s most economically and militarily powerful nation, and a world-wide promoter of democracy (in theory if not always in practice)?

I feel for Harris, whose book was published in 2010, well before anyone really had an inkling of what was to come. In The moral landscape he argues for objective moral values, or moral realism, but you don’t have to agree with his general philosophical position to acknowledge that the advancement of civilisation is largely dependent on the above-quoted traits. But of course, not everyone acknowledges this, or has ever given a thought to the matter. It’s probably true that most people, in the USA and elsewhere, don’t give a tinker’s cuss about the advancement of civilisation.

So the general answer to question one is easy enough, even if the answer in any particular case requires detailed knowledge. I don’t have such knowledge of the family background, childhood and even pre-natal influences that formed Trump’s profoundly problematic character, but reasonable inferences can be made, I think. For example, one of Trump’s most obvious traits is his complete disregard for the truth. To give one trivial example among thousands, he recently described Meghan Markle, now the Duchess of Sussex, as ‘nasty’, in a televised interview. In another televised interview, very shortly afterwards, he denied saying what he was clearly recorded as saying. This regular pattern of bare-faced lying, without any concern about being found out, confronted by his behaviour, or suffering consequences, says something. It says that he has rarely if ever been ‘corrected’ for breaking this commandment, and, very likely, has been rewarded for it from earliest childhood – this reward being likely in the form of amusement, acclamation, and encouragement in this practice. Since, as we know, Trump was a millionaire before he was old enough to pronounce the word, the son of a self-possessed, single-minded property shark, who bestowed on the child a thousand indications of his own importance, it’s more than likely that he grew up in a bubble-world in which self-interest and duplicity were constantly encouraged and rewarded, a world of extreme materialism, devoid of any intellectual stimulation. This is the classic ‘spoilt child’ I’ve already referred to. Often, when a child like this has to stand up on his own feet, his penchant for lying, his contempt for the law and his endless attention-seeking will get him into legal trouble, but Trump appears to have stayed under the wing of his father for much longer than average. His father bailed him out time and time again when he engaged in dumb business deals, until he learned a little more of the slyness of white-collar crime (including learning how to steal from his father). His father’s cronies in the crooked business and legal world would also have taught him much.

Trump is surely a clear-cut case of stunted moral development, the darling child who was encouraged, either directly or though observation of the perverse world of white-collar crime that surrounded him, to listen to no advice but his own, to have devotees rather than friends, and to study and master every possible form of exploitation available to him. Over time, he also realised that his habit of self-aggrandisement could be turned to advantage, and that it would continue to win people, in ever greater numbers, if effectively directed. Very little of this, of course, was the result of what psychologists describe as system 2 thinking – and it would be fascinating to study Trump’s brain for signs of activity in the prefrontal cortex – it was more about highly developed intuitions about what he could get away with, and who he could impress with his bluster.

Now, I admit, all of this is somewhat speculative. Given Trump’s current fame, there will doubtless be detailed biographies written about his childhood and formative years, if they haven’t been written already. My point here is that, given the environment of absurd and dodgy wealth to be found in small pockets of US society, and given the ‘greed is good’ mantra that many Americans (and of course non-Americans) swallow like the proverbial kool-aid, it isn’t so surprising that white-collar crime isn’t dealt with remotely adequately, and that characters like Trump dot the landscape, like pus-oozing pimples on human skin. In fact there are plenty of people, rich and poor alike, who would argue that tax evasion shouldn’t even be a crime… while also arguing that the USA, unlike every other western democracy, can’t afford universal medicare.

So that’s a rough-and-ready answer to question one. Question two has actually been addressed in a number of previous posts, but I’ll address it a little differently here.

The USA is, I think, overly obsessed with the individual. It’s a hotbed of libertarianism, an ideology entirely based on the myth of individualism and ‘individual freedom’, and it’s no surprise that Superman, Batman and most other super-heroes were American products. It’s probable that a sizeable section of Trump’s base see him in ‘superhero’ terms, someone not cut in the mould of Washington politicians, someone larger than life, someone almost from outer space in that he talks and acts differently from normal human beings let alone politicians. This makes him exciting and enlivening – like a comic book. And they’re happy to go along for the ride regardless of whether their lives are improved.

I must admit, though, that I’m mystified when I hear Trump supporters still saying ‘he’s done so much for our country’, when it’s fairly clear to me that, apart from cruelly mistreating asylum-seekers, he’s done little other than tweet insults and inanities and cheat at golf. The massive neglect of every aspect of federal government under his ‘watch’ will take decades to repair, and the question of whether the USA will ever recover from the tragi-comedy of this presidency is hard to answer.

But as to how Trump was ever allowed to become President, it’s all about a dangerously flawed political system, one that has too few safeguards against the simplistic populism that the ancient Greek philosophers railed against 2500 years ago. Unabashed elitists, they were deeply concerned that ‘the mob’ would be persuaded by a charismatic blowhard who promised everything and delivered nothing – or, worse than nothing, disaster. They were concerned because they witnessed it in their lifetime.

The USA today is sadly lacking in those safeguards. It probably thought the safeguards were adequate, until Trump came along. For example, it was expected – among gentlemen, so to speak – that successful candidates would present their tax returns, refuse to turn the Presidency to their own profit, support their own intelligence services and justice department, treat long-time allies as allies and long-time adversaries as adversaries, and, in short, display at least some of the qualities I’ve quoted from Harris at the top of this post.

The safeguards, however, need to go much further than this, IMHO. The power of the Presidency needs to be sharply curtailed. A more distributed, collaborative and accountable system needs to be developed, a team-based system (having far more women in leadership positions would help with this), not a system which separates the President/King and his courtiers/administration from congress/parliament. Pardoning powers, veto powers, special executive powers, power to select unelected officials to high office, power to appoint people to the judiciary – all of these need to be reined in drastically.

Of course, none of this is likely to happen in the near future – and I still believe blood will flow before Trump is heaved out of office. But I do hope that the silver lining to the cloud of this presidency is that, in the long term, a less partisan, less individual-based federal system will be the outcome of this Dark Age.

Written by stewart henderson

June 14, 2019 at 5:00 pm

the self and its brain: free will encore




so long as, in certain regions, social asphyxia shall be possible – in other words, and from a yet more extended point of view, so long as ignorance and misery remain on earth, books like this cannot be useless.

Victor Hugo, author’s preface to Les Misérables

Listening to the Skeptics’ Guide podcast for the first time in a while, I was excited by the reporting on a discovery of great significance in North Dakota – a gigantic graveyard of prehistoric marine and other life forms precisely at the K-T boundary, some 3,000 km from where the asteroid struck. All indications are that the deaths of these creatures were instantaneous and synchronous, the first evidence of mass death at the K-T boundary. I felt I had to write about it, as a self-learning exercise if nothing else.

But then, as I listened to other reports and talking points in one of SGU’s most stimulating podcasts, I was hooked by something else, which I need to get out of the way first. It was a piece of research about the brain, or how people think about it, in particular when deciding court cases. When Steven Novella raised the ‘spectre’ of ‘my brain made me do it’ arguments, and the threat that this might pose to ‘free will’, I knew I had to respond, as this free will stuff keeps on bugging me. So the death of the dinosaurs will have to wait.

The more I’ve thought about this matter, the more I’ve wondered how people – including my earlier self – could imagine that ‘free will’ is compatible with a determinist universe (leaving aside quantum indeterminacy, which I don’t think is relevant to this issue). The best argument for this compatibility, or at least the one I used to use, is that, yes, every act we perform is determined, but the determining factors are so mind-bogglingly complex that it’s ‘as if’ we have free will, and besides, we’re ‘conscious’, we know what we’re doing, we watch ourselves deciding between one act and another, and so of course we could have done otherwise.

Yet I was never quite comfortable about this, and it was in fact the arguments of compatibilists like Dennett that made me think again. They tended to be very cavalier about ‘criminals’ who might try to get away with their crimes by using a determinist argument – not so much ‘my brain made me do it’ as ‘my background of disadvantage and violence made me do it’. Dennett and other philosophers struck me as irritatingly dismissive of this sort of argument, though their own arguments, which usually boiled down to ‘you can always choose to do otherwise’ seemed a little too pat to me. Dennett, I assumed, was, like most academics, a middle-class silver-spoon type who would never have any difficulty resisting, say, getting involved in an armed robbery, or even stealing sweets from the local deli. Others, many others, including many kids I grew up with, were not exactly of that ilk. And as Robert Sapolsky points out in his book Behave, and as the Dunedin longitudinal study tends very much to confirm, the socio-economic environment of our earliest years is largely, though of course not entirely, determinative.

Let’s just run through some of this. Class is real, and in a general sense it makes a big difference. To simplify, and to recall how ancient the differences are, I’ll just name two classes, the patricians and the plebs (or think upper/lower, over/under, haves/have-nots).

Various studies have shown that, by age five, the more plebby you are (on average):

  • the higher the basal glucocorticoid levels and/or the more reactive the glucocorticoid stress response
  • the thinner the frontal cortex and the lower its metabolism
  • the poorer the frontal function concerning working memory, emotion regulation, impulse control, and executive decision making.

All of this comes from Sapolsky, who cites all the research at the end of his book. I’ll do the same at the end of this post (which doesn’t mean I’ve analysed that research – I’m just a pleb after all, and I’m happy to trust Sapolsky). He goes on to say this:

moreover, to achieve equivalent frontal regulation, [plebeian] kids must activate more frontal cortex than do [patrician] kids. In addition, childhood poverty impairs maturation of the corpus callosum, a bundle of axonal fibres connecting the two hemispheres and integrating their function. This is so wrong – foolishly pick a poor family to be born into, and by kindergarten, the odds of your succeeding at life’s marshmallow tests are already stacked against you.

Behave, pp195-6

Of course, this is just the sort of ‘social asphyxia’ Victor Hugo was at pains to highlight in his great work. You don’t need to be a neurologist to realise all this, but the research helps to hammer it home.

These class differences are also reflected in parenting styles (and of course I’m always talking in general terms here). Pleb parents and ‘developing world’ parents are more concerned to keep their kids alive and protected from the world, while patrician and ‘developed world’ kids are encouraged to explore. The patrician parent is more a teacher and facilitator; the plebeian parent is more like a prison guard. Sapolsky cites research into parenting styles in ‘three tribes’: wealthy and privileged; poorish but honest (blue collar); poor and crime-ridden. The poor neighbourhood’s parents emphasised ‘hard defensive individualism’ – don’t let anyone push you around, be tough. Parenting was authoritarian, as was also the case in the blue-collar neighbourhood, though the style there was characterised as ‘hard offensive individualism’ – you can get ahead if you work hard enough, maybe even graduate into the middle class. Respect for family authority was pushed in both these neighbourhoods. I don’t think I need to elaborate too much on what the patrician parenting (soft individualism) was like – more choice, more stimulation, better health. And of course, ‘real life’ people don’t fit neatly into these categories – there are an infinity of variants, but they’re all determining.

And here’s another quote from Sapolsky on research into gene/environment interactions.

Heritability of various aspects of cognitive development is very high (e.g. around 70% for IQ) in kids from [patrician] families but is only around 10% in [plebeian] kids. Thus patrician-ness allows the full range of genetic influences on cognition to flourish, whereas plebeian settings restrict them. In other words, genes are nearly irrelevant to cognitive development if you’re growing up in awful poverty – poverty’s adverse effects trump the genetics.

Behave, p249

Another example of the huge impact of environment/class, too often underplayed by ivory tower philosophers and the silver-spoon judiciary.

Sapolsky makes some interesting points, always research-based of course, about the broader environment we inhabit. Is the country we live in more communal or more individualistic? Is there high or low income inequality? Generally, cultures with high income inequality have less ‘social capital’, meaning levels of trust, reciprocity and cooperation. Such cultures/countries generally vote less often and join fewer clubs and mutual societies. Research into game-playing, a beloved tool of psychological research, shows that individuals from high inequality/low social capital countries show high levels of bullying and of anti-social punishment (punishing ‘overly’ generous players because they make other players look bad) during economic games. They tend, in fact, to punish the too-generous more than they punish actual cheaters (think Trump).

So the factors determining who we are and why we make the decisions we do range from the genetic and hormonal to the broadly cultural. A couple have two kids. One just happens to be conventionally good-looking, the other not so much. Many aspects of their lives will be profoundly affected by this simple difference. One screams and cries almost every night for her first twelve months or so, for some reason (and there are reasons), the other is relatively placid over the same period. Again, whatever caused this difference will likely profoundly affect their life trajectories. I could go on ad nauseam about these ‘little’ differences and their lifelong effects, as well as the greater differences of culture, environment, social capital and the like. Our sense of consciousness gives us a feeling of control which is largely illusory.

It’s strange to me that Dr Novella seems troubled by ‘my brain made me do it’ arguments, because in a sense that is the correct, if trivial, argument to ‘justify’ all our actions. Our brains ‘make us’ walk, talk, eat, think and breathe. Brains R Us. And not even brains – octopuses are newly recognised as problem-solvers and tool-users without even having brains in the usual sense; they have more of a decentralised nervous system, with nine mini-brains somehow co-ordinating when needed. So ‘my brain made me do it’ essentially means ‘I made me do it’, which takes us nowhere. What makes us do things are the factors shaping our brain processes, and they have nothing to do with ‘free will’, this strange, inexplicable phenomenon which supposedly lies outside these complex but powerfully determining factors yet is compatible with them. To say that we could have done otherwise is just an assertion – it’s not a proof of anything.

To be fair to Steve Novella and his band of rogues, they accept that this is an enormously complex issue, regarding individual responsibility, crime and punishment, culpability and the like. That’s why the free will issue isn’t just a philosophical game we’re playing. And lack of free will shouldn’t by any means be confused with fatalism. We can change or mitigate the factors that make us who we are in a huge variety of ways. More understanding of the factors that bring out the best in us, and fostering those factors, is what is urgently required.


Research articles and reading

Behave, Robert Sapolsky, Bodley Head, 2017

These are just a taster of the research articles and references used by Sapolsky re the above.

C Heim et al, ‘Pituitary-adrenal and autonomic responses to stress in women after sexual and physical abuse in childhood’

R J Lee et al, ‘CSF corticotrophin-releasing factor in personality disorder: relationship with self-reported parental care’

P McGowan et al, ‘Epigenetic regulation of the glucocorticoid receptor in human brain associates with childhood abuse’

L Carpenter et al, ‘Cerebrospinal fluid corticotropin-releasing factor and perceived early life stress in depressed patients and healthy control subjects’

S Lupien et al, ‘Effects of stress throughout the lifespan on the brain, behaviour and cognition’

A Kusserow, ‘De-homogenising American individualism: socialising hard and soft individualism in Manhattan and Queens’

C Kobayashi et al, ‘Cultural and linguistic influence on neural bases of “theory of mind”’

S Kitayama & A Uskul, ‘Culture, mind and the brain: current evidence and future directions’.

etc etc etc

Written by stewart henderson

April 23, 2019 at 10:53 am