an autodidact meets a dilettante…

‘Rise above yourself and grasp the world’ – attributed to Archimedes

Posts Tagged ‘neurology’

why do our pupils dilate when we’re thinking hard?


Canto: So we’re reading Daniel Kahneman’s Thinking, fast and slow, among other things, at the moment, and every page has stuff worth writing about and exploring further – it’s impossible to keep up.

Jacinta: Yes, with this stuff it’s a case of reading slow and slower. Or writing about it faster and faster – unlikely in our case. A lot of it might be common knowledge, but not to us. In these first fifty pages or so he’s getting into embodied cognition, which we’ve written about, though there’s new data here that I didn’t know about and which makes a lot of sense to me.

Canto: That’s because you’ve been primed to accept this stuff haha. But I want to focus here more narrowly on experiments Kahneman did early in his career with Jackson Beatty, who went on to become the leading figure in the study of ‘cognitive pupillometry’.

Jacinta: Presumably measuring pupils, which is easy enough, while measuring cognition or cognitive processes, no doubt a deal harder.

Canto: Kahneman tells the story of an article he read in Scientific American – a mag I regularly read in the eighties, so I felt all nostalgic reading this.

Jacinta: Why’d you stop reading it?

Canto: I don’t know – I had a hiatus, then I started reading New Scientist and Cosmos. I should get back to Scientific American. All three. Anyway, the article was by Eckhard Hess, whose wife noticed that his pupils dilated when he looked at lovely nature pictures. He started looking into the matter, and found that people are judged to be more attractive when their pupils are wider, and that belladonna, once used in cosmetics, also dilates the pupils. More importantly for Kahneman, Hess noted that ‘the pupils are sensitive indicators of mental effort’. Kahneman was looking for a research project at the time, so he recruited Beatty to help him with some experiments.

Jacinta: And the result was that our pupils dilate very reliably, and quite significantly, when we’re faced with tough problem-solving tasks, like multiplying double-digit numbers – and they constrict again on completion, so reliably that the monitoring researcher can surprise the subject by saying ‘so you’ve got the answer now?’

Canto: Yes, the subjects were arranged so the researchers could view their eyes magnified on a screen. And of course this kind of research is easy enough to replicate, and has been. My question, though, is why does the pupil dilate in response to such an internal process as concentration? We think of pupils widening to let more light in when the light is dim – that makes intuitive sense – but widening in order to seek a kind of metaphorical enlightenment? That’s fascinating.

Jacinta: Well I think you’re hitting on something there. Think of attention rather than concentration. I suspect that our pupils widen when we attend to something important or interesting. As Eckhard Hess’s wife noticed when he was looking at a beautiful scene. In the case of a mathematical or logical problem we’re attending to something intently as well, and the fact that it’s internal rather than external is not so essential. We’re looking at the problem, seeing the problem as we try to solve it.

Canto: Yes but again that’s a kind of metaphorical seeing, whereas your pupils don’t dilate metaphorically.

Jacinta: Yes but it’s likely that our pupils dilate in the dark only when we’re trying to see in the dark. Making that effort. When we turn off the light at night in our bedroom before going to sleep, it’s likely that our pupils don’t dilate, because we’re not trying to see the familiar objects around us, we just want to fall asleep. So even if we leave our eyes open for a brief period, they’re not actually trying to look at anything. It’s like when you enter a classroom and see a maths problem on the board. Your pupils won’t dilate just on noticing the problem, but only when you try to solve it.

Canto: I presume there’s been research on this – like with everything we ever think of. What I’ve found is that the ‘pupillary light reflex’ is described as part of the autonomic nervous system – a largely involuntary system, which responds ‘autonomically’, unconsciously, to the amount of light the eye receives. But as you say, there are probably other overriding influences, coming from the brain rather than from outside. However, a pupil ‘at rest’, in a darkened room, is usually much dilated. So dilation is by no means always to do with attention or focus.

Jacinta: Well, there’s a distinction made in neurology between bottom-up and top-down processing, which you’ve just alluded to. Information coming from outside, sensed by the skin, the eyes and other sensory organs, is sent ‘up’ to the brain – the Higher Authority – which then sends down responses, in this case to dilate or contract the pupil. All of that is called bottom-up processing. But researchers have found that the pupil isn’t just regulated in a bottom-up way.

Canto: And that’s where cognitive pupillometry comes in.

Jacinta: And here are some interesting research findings regarding top-down influences on pupil size. When subjects were primed with pictures relating to the sun, even dim ones, their pupils contracted more than with pictures of the moon, even when the moon pictures were actually brighter than the sun pictures. And even words connected to brightness made their pupils contract. There’s also been solid research to back up the speculations of Eckhard Hess, that emotional scenes, images and memories, whether positive or negative, have a dilating effect on our pupils. For example, hearing the cute sound of a baby laughing, or the disturbing sound of a baby screaming, widens our pupils, while more neutral sounds of road traffic or workplace hubbub have very little effect.

Canto: Because there’s nothing, or maybe too much info, to focus our attention, surely? While the foregrounded baby’s noises stimulate our sense of wonder, of ‘what’s happening?’ We’re moved to attend to it. Actually this reminds me of something apparently unrelated but maybe not. That’s the well-known problem that we’re moved to give to a charity when one suffering child is presented in an advertisement, and less and less as we’re faced with a greater and greater number of starving children. These numbers become like distant traffic, they disperse our attention and interest.

Jacinta: Yes, well, that’s a whole other story, but this brings us to the most interesting of the findings re top-down effects on our pupils, and to the question we’ve asked in the title. A more scientific name for thinking hard is increased cognitive load, and countless experiments have shown that increasing cognitive load – for example by solving tough maths problems, or committing stacks of info to memory – correlates with increased pupillary dilation. This hard thinking is done in the prefrontal cortex, but we won’t go into detail here about its more or less contested compartments. What I will say is that there’s an obvious difference between thinking and memorising, and both activities increase cognitive load, and pupillary dilation. Some very interesting studies relating memorisation to pupillary dilation have shown that children under a certain age are, unsurprisingly, less able to hold info in short-term memory than adults. The research task was to memorise a long sequence of numbers. Monitoring of pupil response showed that the children’s pupils would constrict from their dilated state after six numbers, unlike those of adults.

Canto: So, while we may not have a definitive answer to our title question – the why question – it seems that cognitive load, like any load that we carry, requires the expenditure of energy, which can be manifested in the tightening of the muscles in the eye that dilate the pupil. This dilation reveals, apparently, that we’re attending to something or concentrating on something. I can see some real-world applications. Imagine, as a teacher, taking a physics class, say. You could get your students to wear special glasses that monitor the dilation and constriction of their pupils – I’m sure such devices could be rigged up, and connected to a special console at the teacher’s desk, so you could see who in the class was paying close attention and who was off in dreamland… (There’s a rough sketch of the detection logic in the postscript below.)

Jacinta: Yeah right haha – even if that was physically possible, there are just a few privacy issues there, and how would you know if the pupillary dilation was due to the fascinating complexities of electromagnetism or the delightful profile of your student’s object of fantasy a couple of seats away? Or how would you know if their apparent concentration had anything much to do with comprehension? Or whether their apparent lack of concentration was due to lack of interest, or incomprehension, or the fact that they were way ahead of you in comprehension?

Canto: Details details. Small steps. One way of finding out all that is by asking them. At least such monitoring would give you some clues to go by. I look forward to this brave new transhumanising world….
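[Postscript for the technically curious: here’s a minimal sketch, in Python, of the kind of detection logic Canto imagines – picking a dilation-then-constriction episode out of a sampled pupil-diameter trace with a crude baseline threshold. The sampling rate, the threshold and all the numbers are invented for illustration; this is a toy version of the idea, not Kahneman and Beatty’s actual method.]

import numpy as np

def detect_effort_window(trace, baseline, threshold=0.15, fs=10):
    # Flag samples where pupil diameter exceeds the resting baseline
    # by more than `threshold` (expressed as a fraction of baseline).
    dilated = trace > baseline * (1 + threshold)
    if not dilated.any():
        return None
    onset = np.argmax(dilated)                            # first sample above threshold
    offset = len(dilated) - 1 - np.argmax(dilated[::-1])  # last sample above threshold
    return onset / fs, offset / fs                        # times in seconds

# Toy trace: rest, dilation during mental arithmetic, constriction on completion.
rng = np.random.default_rng(0)
trace = np.concatenate([np.full(50, 3.0),   # resting diameter, ~3.0 mm
                        np.full(80, 3.8),   # dilated during the task
                        np.full(50, 3.1)])  # back near baseline afterwards
trace += rng.normal(0, 0.05, trace.size)    # measurement noise

print(detect_effort_window(trace, baseline=3.0))  # roughly (5.0, 12.9)

The point is only that the dilation/constriction signature is crisp enough for even a dumb rule to find it – which is presumably why the researchers, watching the magnified eye, could tell the moment a subject had the answer.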

References

Daniel Kahneman, Thinking, fast and slow, 2012

Torres A and Hout M (2019) Pupils: A Window Into the Mind. Front. Young Minds. 7:3. doi: 10.3389/frym.2019.00003, https://kids.frontiersin.org/article/10.3389/frym.2019.00003

Written by stewart henderson

June 24, 2019 at 11:18 am

On Massimo Pigliucci on scientism 2: brains r us


neuroethics is coming…

In his Point of Inquiry interview, Pigliucci mentions Sam Harris’s book The Moral Landscape a couple of times. Harris seeks to make the argument, in that book, that we can establish, sometime in the future, a science of morality. That is, we can be factual about the good life and its opposite, and we can be scientific about the pathways, though there might be many, that lead towards the good life and away from the bad life. I’m in broad agreement about this, though for pragmatic reasons I would probably prefer the term ‘objective’ to ‘scientific’. Just because it doesn’t frighten the horses so much. As mentioned in my previous post, I don’t want to get hung up on terminology. Science obviously requires objectivity, but it doesn’t seem clear to everyone that morality requires objectivity too. I think that it does (as did, I presume, the authors of the Universal Declaration of Human Rights), and I think Harris argues cogently that it does, based on our well-being as a social species. But Pigliucci says this about Harris’s project:

When Sam Harris wrote his famous book The Moral Landscape, the subtitle was ‘How science can solve moral questions’ – something like that. Well that’s a startling question if you think about it because – holy crap! So I would assume that a typical reader would buy that book and imagine that now he’s going to get answers to moral questions such as whether abortion is permissible and in what circumstances, or the death penalty or something… And get them from say physics or chemistry, maybe neuroscience, since Harris has a degree in neuroscience…

Pigliucci makes some strange assumptions about the ‘typical reader’ here. Maybe I’m a long way from being a ‘typical reader’ (don’t we all want to think that?) but, to me, the subtitle (which is actually ‘How science can determine human values’) suggests, again, methodology. By what methods, or by what means, can human value – that’s to say, what is most valuable to human well-being – be determined? I would certainly not have expected, reading the actual subtitle, and considering the main title of the book, answers to specific moral questions. And I certainly wouldn’t expect answers to those questions to come from physics or chemistry. Pigliucci just mentions those disciplines to make Harris’s views seem more outrageous. That’s not good faith arguing. Neuroscience, however, is closer to the mark. Our brains r us, and if we want to know why a particular mammal behaves ‘badly’, or with puzzling altruism, studying the animal’s brain might be one among many places to start. And yet Pigliucci makes this statement later on re ‘scientistic’ scientists:

It seems to me that the fundamental springboard for all this is a combination of hubris, the conviction that what they do is the most important thing – in the case of Sam Harris for instance, it turns out at the end of the book [The Moral Landscape] it’s not just science that gives you the answers, it’s neuroscience that gives you the answers. Well, surprise surprise, he’s a neuroscientist.

This just seems silly to me. Morality is about our thoughts and actions, which start with brain processes. Our cultural practices affect our neural processes from our birth, and even before our conception, given the cultural attitudes and behaviours of our future parents. It’s very likely that Harris completed his PhD in cognitive neuroscience because of his interest in human behaviour and its ethical consequences (Harris is of course known for his critique of religion, but there seems no doubt that his greatest concerns about religious belief are at base concerns about ethics). Yet according to Pigliucci, had Harris been a physicist he would have written a book on morality in terms of electromagnetic waves or quantum electrodynamics. And of course Pigliucci doesn’t examine Harris’s reasoning as to why he thinks science, and most particularly neuroscience and related disciplines, can determine human values. He appears to simply dismiss the whole project as hubristic and wrong-headed.

I know that I’m being a little harsh in critiquing Pigliucci based on a 20-minute interview, but there doesn’t seem to be any attempt, at least here, to explain why certain topics are or should be off-limits to science, except to imply that it’s obvious. Does he feel, for example, that religious belief should be off-limits to scientific analysis? If so, what do reflective non-religious people do with their puzzlement and wonder about such beliefs? And if it’s worth trying to get to the bottom of what cultural and psychological conditions bring about the neurological networking that disposes people to believe in a loving or vengeful omnipotent creator-being, it’s also worth trying to get to the bottom of other mind-sets that dispose people to behave in ways productive or counter-productive to their well-being. And the reason we’re interested isn’t just curiosity, for the point isn’t just to understand our human world, but to improve it.

Finally, Pigliucci seems to confuse scientism with a lack of interest in philosophy, especially as it pertains to science, among such people in his orbit as Neil deGrasse Tyson and Lawrence Krauss. They’re surely two different things. It isn’t ‘scientism’ for a scientist to eschew a particular branch of philosophy any more than it is for her to eschew a different field of science from her own, though it might sometimes seem a bit narrow-minded. Of course, as a non-scientist and self-professed dilettante I’m drawn to those with a wide range of scientific and other interests, but I certainly recognise the difficulty of getting your head around quantum mechanical, legal, neurological, biochemical and other terminology (I don’t like the word ‘jargon’), when your own ‘rabbit hole’ is so fascinating and enjoyably time-consuming.

There are, of course, examples of scientists claiming too much for the explanatory power of their own disciplines, and that’s always something to watch for, but overall I think the ‘scientism’ claim is more abused than otherwise – ‘weaponised’ is the trendy term for it. And I think Pigliucci needs to be a little more skeptical of his own views about the limits of science.

Written by stewart henderson

May 26, 2019 at 3:09 pm

the self and its brain: free will encore



yeah, right

so long as, in certain regions, social asphyxia shall be possible – in other words, and from a yet more extended point of view, so long as ignorance and misery remain on earth, books like this cannot be useless.

Victor Hugo, author’s preface to Les Misérables

Listening to the Skeptics’ Guide podcast for the first time in a while, I was excited by the reporting on a discovery of great significance in North Dakota – a gigantic graveyard of prehistoric marine and other life forms precisely at the K-T boundary, some 3,000 km from where the asteroid struck. All indications are that the deaths of these creatures were instantaneous and synchronous, the first evidence of mass death at the K-T boundary. I felt I had to write about it, as a self-learning exercise if nothing else.

But then, as I listened to other reports and talking points in one of SGU’s most stimulating podcasts, I was hooked by something else, which I need to get out of the way first. It was a piece of research about the brain, or how people think about it, in particular when deciding court cases. When Steven Novella raised the ‘spectre’ of ‘my brain made me do it’ arguments, and the threat that this might pose to ‘free will’, I knew I had to respond, as this free will stuff keeps on bugging me. So the death of the dinosaurs will have to wait.

The more I’ve thought about this matter, the more I’ve wondered how people – including my earlier self – could imagine that ‘free will’ is compatible with a determinist universe (leaving aside quantum indeterminacy, which I don’t think is relevant to this issue). The best argument for this compatibility, or at least the one I used to use, is that, yes, every act we perform is determined, but the determining factors are so mind-bogglingly complex that it’s ‘as if’ we have free will, and besides, we’re ‘conscious’, we know what we’re doing, we watch ourselves deciding between one act and another, and so of course we could have done otherwise.

Yet I was never quite comfortable about this, and it was in fact the arguments of compatibilists like Dennett that made me think again. They tended to be very cavalier about ‘criminals’ who might try to get away with their crimes by using a determinist argument – not so much ‘my brain made me do it’ as ‘my background of disadvantage and violence made me do it’. Dennett and other philosophers struck me as irritatingly dismissive of this sort of argument, though their own arguments, which usually boiled down to ‘you can always choose to do otherwise’ seemed a little too pat to me. Dennett, I assumed, was, like most academics, a middle-class silver-spoon type who would never have any difficulty resisting, say, getting involved in an armed robbery, or even stealing sweets from the local deli. Others, many others, including many kids I grew up with, were not exactly of that ilk. And as Robert Sapolsky points out in his book Behave, and as the Dunedin longitudinal study tends very much to confirm, the socio-economic environment of our earliest years is largely, though of course not entirely, determinative.

Let’s just run through some of this. Class is real, and in a general sense it makes a big difference. To simplify, and to recall how ancient the differences are, I’ll just name two classes, the patricians and the plebs (or think upper/lower, over/under, haves/have-nots).

Various studies have shown that, by age five, the more plebby you are (on average):

  • the higher the basal glucocorticoid levels and/or the more reactive the glucocorticoid stress response
  • the thinner the frontal cortex and the lower its metabolism
  • the poorer the frontal function concerning working memory, emotion regulation, impulse control and executive decision-making.

All of this comes from Sapolsky, who cites all the research at the end of his book. I’ll do the same at the end of this post (which doesn’t mean I’ve analysed that research – I’m just a pleb after all. I’m happy to trust Sapolsky). He goes on to say this:

moreover, to achieve equivalent frontal regulation, [plebeian] kids must activate more frontal cortex than do [patrician] kids. In addition, childhood poverty impairs maturation of the corpus callosum, a bundle of axonal fibres connecting the two hemispheres and integrating their function. This is so wrong – foolishly pick a poor family to be born into, and by kindergarten, the odds of your succeeding at life’s marshmallow tests are already stacked against you.

Behave, pp195-6

Of course, this is just the sort of ‘social asphyxia’ Victor Hugo was at pains to highlight in his great work. You don’t need to be a neurologist to realise all this, but the research helps to hammer it home.

These class differences are also reflected in parenting styles (and of course I’m always talking in general terms here). Pleb parents and ‘developing world’ parents are more concerned to keep their kids alive and protected from the world, while patrician and ‘developed world’ kids are encouraged to explore. The patrician parent is more a teacher and facilitator, the plebeian parent is more like a prison guard. Sapolsky cites research into parenting styles in ‘three tribes’: wealthy and privileged; poorish but honest (blue collar); poor and crime-ridden. The poor neighbourhood’s parents emphasised ‘hard defensive individualism’ – don’t let anyone push you around, be tough. Parenting was authoritarian, as was also the case in the blue-collar neighbourhood, though the style there was characterised as ‘hard offensive individualism’ – you can get ahead if you work hard enough, maybe even graduate into the middle class. Respect for family authority was pushed in both these neighbourhoods. I don’t think I need to elaborate too much on what the patrician parenting (soft individualism) was like – more choice, more stimulation, better health. And of course, ‘real life’ people don’t fit neatly into these categories – there’s an infinity of variants – but they’re all determining.

And here’s another quote from Sapolsky on research into gene/environment interactions.

Heritability of various aspects of cognitive development is very high (e.g. around 70% for IQ) in kids from [patrician] families but is only around 10% in [plebeian] kids. Thus patrician-ness allows the full range of genetic influences on cognition to flourish, whereas plebeian settings restrict them. In other words, genes are nearly irrelevant to cognitive development if you’re growing up in awful poverty – poverty’s adverse effects trump the genetics.

Behave, p249

Another example of the huge impact of environment/class, too often underplayed by ivory tower philosophers and the silver-spoon judiciary.

Sapolsky makes some interesting points, always research-based of course, about the broader environment we inhabit. Is the country we live in more communal or more individualistic? Is there high or low income inequality? Generally, cultures with high income inequality have less ‘social capital’, meaning lower levels of trust, reciprocity and cooperation. People in such cultures/countries generally vote less often and join fewer clubs and mutual societies. Research into game-playing, a beloved tool of psychological research, shows that individuals from high-inequality/low-social-capital countries display high levels of bullying and of anti-social punishment (punishing ‘overly’ generous players because they make other players look bad) during economic games. They tend, in fact, to punish the too-generous more than they punish actual cheaters (think Trump).

So the determining factors behind who we are and why we make the decisions we do range from the genetic and hormonal to the broadly cultural. A couple have two kids. One just happens to be conventionally good-looking, the other not so much. Many aspects of their lives will be profoundly affected by this simple difference. One screams and cries almost every night for her first twelve months or so, for some reason (and there are reasons), the other is relatively placid over the same period. Again, whatever caused this difference will likely profoundly affect their life trajectories. I could go on ad nauseam about these ‘little’ differences and their lifelong effects, as well as the greater differences of culture, environment, social capital and the like. Our sense of consciousness gives us a feeling of control which is largely illusory.

It’s strange to me that Dr Novella seems troubled by ‘my brain made me do it’ arguments, because in a sense that is the correct, if trivial, argument to ‘justify’ all our actions. Our brains ‘make us’ walk, talk, eat, think and breathe. Brains R Us. And not even brains – octopuses are newly recognised as problem-solvers and tool-users without even having brains in the usual sense – they have more of a decentralised nervous system, with nine mini-brains somehow co-ordinating when needed. So ‘my brain made me do it’ essentially means ‘I made me do it’, which takes us nowhere. What makes us do things are the factors shaping our brain processes, and they have nothing to do with ‘free will’, this strange, inexplicable phenomenon which supposedly lies outside these complex but powerfully determining factors yet is compatible with them. To say that we could have done otherwise is just to say it – it’s not a proof of anything.

To be fair to Steve Novella and his band of rogues, they accept that this is an enormously complex issue, regarding individual responsibility, crime and punishment, culpability and the like. That’s why the free will issue isn’t just a philosophical game we’re playing. And lack of free will shouldn’t by any means be confused with fatalism. We can change or mitigate the factors that make us who we are in a huge variety of ways. More understanding of the factors that bring out the best in us, and fostering those factors, is what is urgently required.

just thought I’d chuck this in

Research articles and reading

Behave, Robert Sapolsky, Bodley Head, 2017

These are just a taster of the research articles and references used by Sapolsky re the above.

C Heim et al, ‘Pituitary-adrenal and autonomic responses to stress in women after sexual and physical abuse in childhood’

R J Lee et al ‘CSF corticotrophin-releasing factor in personality disorder: relationship with self-reported parental care’

P McGowan et al, ‘Epigenetic regulation of the glucocorticoid receptor in human brain associates with childhood abuse’

L Carpenter et al, ‘Cerebrospinal fluid corticotropin-releasing factor and perceived early life stress in depressed patients and healthy control subjects’

S Lupien et al, ‘Effects of stress throughout the lifespan on the brain, behaviour and cognition’

A Kusserow, ‘De-homogenising American individualism: socialising hard and soft individualism in Manhattan and Queens’

C Kobayashi et al, ‘Cultural and linguistic influence on neural bases of “theory of mind”’

S Kitayama & A Uskul, ‘Culture, mind and the brain: current evidence and future directions’.

etc etc etc

Written by stewart henderson

April 23, 2019 at 10:53 am

on anthropomorphism and human specialness


Chimps gather to mourn the death of an elder

Recently I got into a bit of a barney with a friend who mocked the Great God David Attenborough for talking, in one of his whispered jungle monologues beside some exotic creatures or other, of the ‘mummy’ creature doing this and the ‘daddy’ creature doing that. My friend was slightly pissed off at this ‘anthropomorphism’. What followed is best dismissed as the insidious effects of too much jungle juice and jungle-jangle jazz, but the issue strikes me as an important one, so I’ll examine it further here.

There was a time when ethologists – those who study the behaviour of non-human animals – considered anthropomorphism a giant no-no. To describe an organism as he or she was (or seemed) to ascribe personhood to it, and clearly only humans can be persons. This was unscientific, and kind of soft. After all, ‘animals’ are driven by instinct, whereas humans make conscious decisions. They deliberate, they confer, they worry, they grieve, they organise, they invent, and they have a highly developed prefrontal cortex lacking in other species. And because they have a sophisticated Theory of Mind, they have fun ascribing such mental states to their pets – ‘my dog Peaches understands every word I say/loves playing with my iPad/always helps me with the gardening’. Scientists of course eschewed such fluffiness in their research, while recognising that anthropomorphism will always be with us, as a type of human failing.

When Jane Goodall began publishing pioneering papers on chimp behaviour in Tanzania in the 1960s, she was quickly accused of anthropomorphism, ‘the cardinal sin of ethology’, but the impact of her work, together with that of other women in the field such as Dian Fossey on gorillas and Birutė Galdikas on orang-utans, was so transformative that it not only changed attitudes toward anthropomorphism but helped overturn the dominant paradigm in ethology and human psychology – behaviourism. And I don’t think the fact that these were all women was coincidental.

What Goodall et al were describing was complex social and family behaviour, driven by feelings – anger, fear, lust, shame and grief, to name a few. It was, in fact, nothing new. Darwin himself wrote The Expression of the Emotions in Man and Animals, in which he regularly used anthropomorphic terminology. However, the fact that it has now become more standard is due as much to neurological research as to field ethology. I’ve written elsewhere about bird brains, and the transformative and ongoing research into them. Research has also found that many bird species have extended family relationships. How do they recognise sisters and aunties when they all look the same? Maybe humans all look the same to a parrot (actually plenty of evidence says that they don’t). Neurological research into humans and non-human species is growing exponentially, and is quickly eroding the sense of our neurological specialness, which is a good thing. For example, in the bad old days, non-human primates were a regular subject of human research – far more than they are now. It’s much easier to, say, remove part of a marmoset’s brain – sans anaesthetic – and observe its reactions if you’ve always referred to the creature as ‘it’ rather than ‘her’ or ‘him’, let alone as someone’s mother or daughter. But that’s exactly what they are. And they know it.

So why are some people still resistant to anthropomorphic terminology? It may be a religious hangover – most of the major religions make a sharp distinction between humans and brute beasts, and our language is full of these ‘human specialness’ distinctions, which we rarely notice. The term ‘animal’, for example, standardly excludes the human animal. Since most of us can’t distinguish between a male and female bird of most species, we use the general term ‘it’, but if we’re presented with a new-born human animal we’re likely to inquire after its gender so as not to use the insulting term I’ve just used.

Returning to the argument mentioned at the top of this post, the issue seemed to be that we shouldn’t use ‘mummy’ or ‘daddy’ etc to refer to non-human animals – that these terms are used for human relations, and should be used exclusively for that purpose. I can see no logic to this argument. Of course, birds don’t think of their parents as ‘mummy’ or ‘daddy’, but neither do they think of themselves as ‘birds’ or ‘oiseaux’ or ‘tori’ (Japanese). So if we refer to their relationships as ‘male parent’, ‘female parent’ and ‘offspring’, instead of mum, dad and the kids, that is just as much an imposition on them – a deliberately distancing imposition, emphasising our superiority – as the anthropomorphic terms.

One of the parties to this recent contretemps suggested we calm down, ‘it’s just a nomenclature issue’. Of course this is true, it’s all about nomenclature. And nomenclature can be really important – it can be racist, classist, as well as speciesist. The terms we use for other creatures can help to determine whether we see them as our friends or our dinner. In the meantime, continued neurological and ethological research will, I believe, contribute further to the dissolution of the old rule against anthropomorphism, and the Great God Attenborough’s whispering tones will resonate through the firmament, as surely as mummy chimps mourn the loss of their babies.

Elephants live in multi-family groups for up to 70 years and develop strong, intimate bonds


Written by stewart henderson

February 5, 2019 at 9:29 am

Deep brain stimulation, depression and ways of thinking


I read in a recent New Scientist that some progress has been made in using deep brain stimulation (DBS) to find associations between electrical brain activity and ‘mood’ or mood changes. This appears to mean that there’s an electrical ‘signal’ for happiness, sadness, anxiety, frustration, and any other emotion we can give a name to. And to paraphrase Karl Marx, the point is not to understand the brain, but to change it – at least for those who suffer depression, PTSD, bipolar disorder, epilepsy and a host of other debilitating disorders. So that we can all be happy clapping productive people…

So what is DBS and where is it heading? Apparently, electrodes can be implanted in specific brain regions to monitor, and in some cases actually change, ‘negative’ electrical activity. I use scare quotes here not to indicate opposition, but to highlight the obvious: that one person’s negativity may not be another’s, and that eliminating the negative also means eliminating the positive, as one means nothing without the other. It’s similar to the point that loving everyone means loving no-one.

But I’m getting ahead of myself with these ethical matters. Here’s a simple overview of DBS from the Mayo Clinic:

Deep brain stimulation involves implanting electrodes within certain areas of your brain. These electrodes produce electrical impulses that regulate abnormal impulses. Or, the electrical impulses can affect certain cells and chemicals within the brain.
The amount of stimulation in deep brain stimulation is controlled by a pacemaker-like device placed under the skin in your upper chest. A wire that travels under your skin connects this device to the electrodes in your brain.

The most recent research translated neural signals into the mood variations of seven epilepsy sufferers who were fitted with implanted electrodes. The participants filled out periodic questionnaires about their mood, and clear matches were supposedly found between those self-reports and patterns of brain signals. Based on this knowledge, a decoder was built that would recognise particular signal patterns related to particular moods. It was successful in detecting mood 75% of the time. Brain patterns varied between participants, but were confined mainly to the limbic system, a network essential to triggering swings of emotion. 
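To make the decoding step concrete, here’s a minimal sketch of the general approach – extract features from the recorded signals, pair them with the self-reported mood labels, and cross-validate a simple classifier. Everything below is invented stand-in data, and the feature layout is an assumption, not taken from the study; it’s just the shape of the method, in Python.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Stand-in data: imagine 16 features per time window (say, 4 limbic-region
# electrodes x 4 frequency bands), each window labelled with self-reported
# mood, 0 (low) or 1 (high). All of this is simulated.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 16))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 200) > 0).astype(int)

decoder = LogisticRegression(max_iter=1000)
scores = cross_val_score(decoder, X, y, cv=5)  # accuracy on held-out folds
print(f'mean decoding accuracy: {scores.mean():.2f}')  # roughly 0.75 on this toy data

On real recordings the features and labels would be far harder-won, and a separate decoder would presumably be needed for each person, since the brain patterns varied between participants.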

I’m not sure if I should be overly impressed with a sample size of seven and a 75% success rate, but I do think that this research is on the right track, and there will be increasingly successful pinpointing of brain activity in relation to mood in the future, as well as other improvements, for example in the use of electrodes. Currently there’s an issue around the damaging long-term effects of implants, and non-invasive systems are being developed that can stimulate the brain from outside the skull. And of course there’s that next step, modulating those mood swings to ‘fix’ them, or to head them off at the pass.

All of this raises vital questions in relation to causes and treatments. If we focus on that most difficult but pervasive condition, depression, which so many people I know are medicating themselves against, it would seem that a brain-stimulation ‘cure’ would be less damaging than any course of anti-depressants, but it completely bypasses the question of why so many people are apparently suffering from this condition these days. Johann Hari has written a bestseller on depression, Lost Connections, which I haven’t read though I’ve just obtained a copy, and I’ve heard his long-form interview on Sam Harris’ Waking Up podcast. So I’ll probably revisit this issue more than once.

The medical establishment is more interested in treatment than in causes, and generally investigates causes only so as to refine treatments, but severe depression has proved difficult to treat other than with drugs which may have severe side-effects when used long-term. Clinicians have used terms such as treatment-resistant depression (TRD) and major depressive disorder (MDD) to characterise these conditions, which are on the rise worldwide, particularly in the more affluent nations. 

DBS first came to prominence as a promising treatment for movement disorders such as Parkinson’s disease and dystonia (which causes muscles to contract uncontrollably). It has since been used for more psycho-neurological ailments such as OCD, Tourette syndrome and severe, treatment-resistant addiction, with modest but statistically significant benefits. It has even shown promise in the treatment of some forms of dementia. Side-effects have been mostly confined to surgical procedures.

Clearly this type of treatment will improve with better targeting and increased knowledge of brain regions and their interactions, and in the case of MDD, which can be overwhelmingly debilitating, it offers much hope of a better life. But the question remains – why is depression increasing, and why in those countries that appear to offer a richer and more stimulating environment for their citizens? 

Hari’s title, Lost Connections, more than hints at his view of this, and in a recent conversation it was suggested to me that, in more subsistence societies, most people are too busy struggling to survive and keep their families alive and well to have the time to be depressed. This might seem a slap in the face to MDD sufferers (and I might add that the person making that suggestion is on anti-depressants), but surely there’s a grain of truth to it. I’ve often had travellers say to me ‘you should visit x, the people there have so little, yet they’re so happy and relaxed’. Is this a matter of ignorance being bliss? I recall, as a fifteen-year-old in one of the world’s most affluent and educated countries, wagging school and reading one of my brother’s economics textbooks – he was at university – and trying to get my head around the laws of supply and demand. It occurred to me that this might take years – but what about the other subjects that gripped me when I read about them? Astronomy, physics, ancient history, music, subjects that often had little to do with each other but which you could spend your whole lifetime immersed in. Not to mention other childhood ambitions that hadn’t been let go, to be a great sport star, or rock star, or latter-day Casanova…

This sense, cultivated in advanced societies, that you can achieve anything you set your mind to, can easily overwhelm you. Faced with so many choices, and so many gaps in skill and knowledge between what you are and what you’d like to be, it’s inevitable that sometimes you’ll feel flat, crushed by the weight of your own delirious hopes and expectations. This might be called a mood-swing, a symptom of depression, or even of bipolar disorder. All effort to climb that mighty mountain seems fruitless. The very thought of it sends you back to bed.

Such moods have overtaken me many times, but I’ve never called myself depressed, at least not in a clinical sense, and never sought medical advice or taken anti-depressant medication. I’ve occasionally been pressured to do so, because misery likes company, but I have a kind of basic stoicism which knows these moods will pass and that I should ‘rise above myself and grasp the world’ – a quote said to be from Archimedes, which is the new subtitle of my blog. 

The point here is that I think I have a sense of where all this depression is coming from, and it’s not just about a lack of connection. Nor is it, surely, all about low serotonin levels, or receptor malfunctions or other purely chemical causes. It’s so much more complicated than that. That’s to say it’s about all of these things but also about failure, the gap between the ideal and the real, the gap – in advanced countries – between the privileged rich and the disadvantaged poor, disillusionment, stress, grief, selfishness, the hope deferred that makes the heart sick…

So – back to DBS. Presumably this and other treatments are judged by the same measure of success, which might be described as ‘improved functionality within the wider world’. Being able to hold down a job, hold a conversation, hold on to your partner, hold a baby without dropping it, etc. Of course, this is a worthwhile aim of any treatment, but what is actually happening to the brain under such a treatment? Neurologists might one day be able to describe this effectively in terms of dopamine levels and electrical activity, and the stimulation or becalming of regions of the nucleus accumbens and so forth, but on the level of thinking, dreaming, wondering – all those terms studiously avoided, or just ignored, by neurology (for understandable reasons) – what is happening? We don’t know. Treatment seems essentially a matter of dealing with functionality in the external world, and letting that inner world take care of itself. Is that the right approach? Something gained, but something lost? I really don’t know.

Written by stewart henderson

December 18, 2018 at 2:27 pm

on luck, and improving environments


Trump wasn’t born here, and neither was I

I’m in the process of reading Behave, by Robert Sapolsky, a professor of neurology and biology at Stanford University, who has tried in his book to summarise, via the research literature, the seconds, then minutes, then hours, then days, then lifetimes and more, that precede any particular piece of behaviour. It’s a dense but fascinating book, which aligns with, and provides mountains of evidence for, my view that we’re far less in control of ourselves than we think.

It seems we think this because of what might be called conscious awareness of our behaviours and our decisions. This consciousness is something we sometimes mistake for control. It’s interesting that we consider it obvious that we have no control over the size of our nose or the colour of our eyes, but that we have more or less complete control of our temper, appetites, desires and ambitions.

Humanistically speaking, this understanding of our very limited control ought to have massive implications for our understanding of others. We don’t get to choose our parents, our native country or the immediate environment that most profoundly affects our early life and much of our subsequent behaviour. The flow of hormones and neurotransmitters and their regulation via genetic and epigenetic factors proceed daily, hourly, moment by moment, and all we’re aware of, essentially, is outcomes.

A lot of people, I note, are very uncomfortable about this kind of talk. For example, many of us want to treat each other as ‘equal before the law’. But is one person ever ‘equal’ with another? We know – it’s obvious – that we’re all different. That’s how we distinguish people, by their smiles, their voices, their fingerprints, their DNA. So how can we be different and equal at the same time? Or, to turn things around, how can a legal system operate if everyone is treated as different, unique, a special case?

Well, in a sense, we already do this, with respect to the law. No two bank robberies, or rapes, or murders are the same, and the judiciary must be highly attuned to the differences when applying punishments. Nowadays, and increasingly, the mental state of the offender – particularly at the time of the offence, if that can be ascertained – is considered when sentencing.  And this is surely a good thing. 

The question here is, considering the exponential growth of our neurophysiological knowledge in the 21st century, and its bearing on our understanding of every kind of negative or positive behaviour we engage in, how can we harness that knowledge to improve outcomes and move from a punitive approach to bad behaviours to something more constructive?

Of course, it’s one thing to identify the release or suppression of glucocorticoids, for example, and its effect on person x’s cognitive faculties, it’s entirely another thing to effect a remedy. And to what effect? To make everyone docile, ‘happy’ and law-abiding? To have another go at eugenics, this time involving far more than just genes? 

One of the points constantly hammered home in Sapolsky’s book is the effect of environment on everything that goes on inside us, so that, for example, genes aren’t quite as determinative as we once thought. Here are some key points from his chapter on genes (with apologies about unexplained terms such as epigenetic, transcription and transposons):

a. Genes are not autonomous agents commanding biological events.

b. Instead genes are regulated by the environment, with environment consisting of everything from events inside the cell to the universe.

c. Much of your DNA turns environmental influences into gene transcription, rather than coding for genes themselves; moreover, evolution is heavily about changing regulation of gene transcription, rather than genes themselves.

d. Epigenetics can allow environmental effects to be lifelong, or even multigenerational.

e. And thanks to transposons, neurons contain a mosaic of different genomes. 

And genes are only one component of the array of forces that influence or control our behaviour. We know, of course, about how Phineas Gage-type accidents and brain tumours can alter behaviour, but many other effects on the brain can alter our behaviour without us and others knowing too much about it. These include stress, malnutrition, and long-term cultural and religious influences which permanently affect our attitudes to, for example, women, other species and the food we eat. Domestic violence, drug use, political affiliations, educational outcomes and sexual affinities are all more inter-generational than we’re generally prepared to admit.

The first thing we need to do is be aware of all this in our judgment of others, and even of ourselves. There’s just so much luck involved in being who we are. We could’ve been more or less ‘good-looking’ than we are – according to the standards of the culture around us – and this would’ve affected the way we’ve been treated throughout our whole lives. We could’ve been born richer or poorer, with more or less dysfunctional parents, taller or shorter, more or less mentally agile, more or less immune to the pathogens that surround us. On and on and on we could go, even to an extreme degree. We could’ve been born in Algeria, Argentina or Azerbaijan. We could’ve been born in 1912, 1412 or 512, or 150,000 years ago. We could’ve been born a mongoose, a mouse or a mosquito. It’s all luck, whether good or bad is up to us to decide, but probably not worth speculating about as we have no choice but to make the best of what we are.

What we do have is consciousness or awareness of what we are. And with that consciousness we can speculate, as we as a species always have, on how to make the best of ourselves, given that we’re the most socially constructed mammalian species on the planet, and for that reason the most successful, measured by population, spread across the globe, and what we’ve done for ourselves in terms of social evolution – our science, our technology, our laws and our politics.  

That’s where humanism comes in, for me. Since we know that ‘there but for the randomness of luck go I’, it surely follows that we should sympathise with those whose luck hasn’t been as good as our own, and strive to improve the lot of those less fortunate. Safe havens, educational opportunities, decent wages, human rights, clean environments, social networks – we know what’s required for people to thrive. Yet we focus, I think, too much on punishment. We punish people for trying to improve their family’s situation – or to avoid obliteration – by seeking refuge in safer, richer, healthier places. We punish them for seeking solace in drugs because their circumstances are too overwhelming to deal with. We punish them for momentary and one-off lapses of concentration that have had dire consequences. Of course it has always been thus, and I think we’re improving, though very unevenly across the globe. And the best way to improve is by more knowing. And more understanding of the consequences of that knowledge.

Currently, it seems to me, we’re punishing people too much for doing what impoverished, damaged, desperate people do to survive. It’s understandable, perhaps, in our increasingly individualist world. How dare someone bother me for handouts. It’s not my fault that x has fucked up his life. Bring back capital punishment for paedophiles. People smugglers are the lowest form of human life. Etc etc – mostly from people who don’t have a clue what it’s like to be those people. Because their life is so different, through no fault, or cause, of their own. 

So to me the message is clear. Our lives would be better if others’ lives were better – if we could give others the opportunities, the health, the security and the smarts that we have, and if we could have all of those advantages that they have. I suppose that’s kind of impossible, but it’s better than blaming and punishing, and feeling superior. We’re not superior, we’re just lucky. Or not.

  

Written by stewart henderson

December 4, 2018 at 2:22 pm

another look at free will, with thanks to Robert Sapolsky


Ah poor old Aynnie – from guru to laughing stock within a couple of gens

Having recently had a brief conversation about free will, I’ve decided to look at the matter again. Fact is, it’s been playing on my mind. I know this is a very old chestnut in philosophy, renewed somewhat by neurologists recently, and I know that far more informed minds than mine have devoted oodles of time and energy to it, but my conversation was with someone with no philosophical or neurological background who simply found the idea of our having no free will, no autonomy, no ‘say’ whatever in our lives, frankly ludicrous. Free will, after all, was what made our lives worth living. It gives us our dignity, our self-respect, our pride in our achievements, our sense of shame or disappointment at having made bad or unworthy decisions. To deny us our free will would deny us… far, far too much.

My previous piece on the matter might be worth a look (having just reread it, it’s not bad), but it seems to me the conundrum can be made clear by thinking in two intuitively obvious but entirely contradictory ways. First, of course we have free will, which we demonstrate with a thousand voluntary decisions made every day – what to wear, what to eat, what to watch, what to read, whether to disagree or hold our tongue, whether to turn right or left in our daily walk, etc etc. Second, of course we don’t have free will – student A can’t learn English as quickly and effectively as student B, no matter how well you teach her; this student has a natural ability to excel at every sport, that one is eternally clumsy and uncoordinated; this girl is shy and withdrawn, that one’s a noisy show-off, etc etc.

The first way of thinking comes largely from self-observation, the second comes largely from observing others (if only others were as free to be like us as we are). And it seems to me that most relationship breakdowns come from 1) not allowing the other to be ‘free’ to be themselves, or 2) not recognising the other’s lack of freedom to change. Take your pick.

So I’ve just read Robert Sapolsky’s take on free will in his book Behave, and it strengthens me in my ‘free will is a myth’ conviction. Sapolsky somewhat mocks the free will advocates with the notion of an uncaused homunculus inside the brain that does the deciding with more or less good sense. The point is that ‘compatibilism’ can’t possibly make sense. How do you sensibly define ‘free will’ within a determinist framework? Is this compatibilism just a product of the eternal complexity of the human brain? We can’t tease out the chain of causal events, therefore free will? So if at some future date we were able to tease out those connections, free will would evaporate? As Sapolsky points out, we are much further along at understanding the parts of the prefrontal cortex and the neuronal pathways into and out of it, and research increases exponentially. Far enough along to realise how extraordinarily far we have to go. 

One way of thinking of the absurdity of the self-deciding self is to wonder when this decider evolved. Is it in dogs? Is it in mosquitos? The probable response would be that dogs have a partial or diminished free will, mosquitos much less so, if at all. As if free will was an epiphenomenon of complexity. But complexity is just complexity; there seems no point in adding free will to it.

But perhaps we should take a look at the best arguments we can find for compatibilism or any other position that advocates free will. Joachim Krueger presents five arguments on the Psychology Today website, though he’s not convinced by any of them. The second argument relates to consciousness (a fuzzy concept avoided by most neurologists I’ve read) and volition, a tricky concept that Krueger defines as ‘will’ but not free will. Yes, there are decisions we make, which we may weigh up in our minds, to take an overseas holiday or spend a day at the beach, and they are entirely voluntary, not externally coerced – at least to our minds. However, that doesn’t make them free, outside the causal chain. But presumably compatibilists will agree – they are wedded to determinism after all. So they must have to define freedom in a different way. I’ve yet to find any definition that works for the compatibilist.

There’s also a whiff of desperation in trying to connect free will with quantum indeterminacy, as some have done. Having read Life on the edge, by Jim Al-Khalili and Johnjoe McFadden, which examines the possibilities of quantum effects at the biological level, I’m certainly open to the science on this, but I can’t see how it would apply at the macro level of human decision-making. And this macro level is generally far more ‘unconscious’ than we have previously believed. That’s another way of saying that, with the growth of neurology (and my previous mention of exponential growth in this field is no exaggeration) – the mapping of neurological activity, the research into neurotransmission and general brain chemistry – the concept of ‘consciousness’ has largely been ignored, perhaps because it resembles too much the homunculus that Sapolsky mocks.

As Sapolsky quite urgently points out, this question of free will and individual responsibility is far from being the fun and almost frolicsome philosophical conundrum that some have seemed to suggest. It has major implications for the law, and for crime and punishment. For example, there are legal discussions in the USA, one of the few ‘civilised’ nations that still execute people, as to the IQ level above which you’re smart enough to be executed, and how that IQ is to be measured. This legal and semi-neurological issue affects a significant percentage of those on death row. Many of the same people have been shown to have damage to the prefrontal cortex. How much damage? How did this affect the commission of the crime? Neurologists may not be able to answer this question today, but future neurologists might.

So, for me, the central issue in the free will debate is the term ‘free’. Let’s look at how Marvin Edwards describes it in his blog post ‘Free will skepticism: an incoherent notion’. I’ve had a bit of a to-and-fro with Marvin – check out the comments section on my previous post on the topic, referenced below. His definition is very basic. For a will, or perhaps I should say a decision, to be free it has to be void of ‘undue influences’. That’s it. And yet he’s an out and out determinist, agreeing that if we could account for all the ‘influences’, or causal operants, affecting a person’s decision, we could perfectly predict that decision in advance. So it is obvious to Marvin that free will and determinism are perfectly compatible.

That’s it, I say again. That’s the entire substance of the argument. It all hangs on this idea of ‘undue influence’, an idea apparently taken from standard philosophical definitions of free will. Presumably a ‘due influence’ is one that comes from ‘the self’ and so is ‘free’. But this is an incoherent notion, to borrow Marvin’s phrase. Again it runs up against Sapolsky’s homunculus, an uncaused decider living inside the brain, aka ‘the self’. Here’s what Sapolsky has to say about the kind of compatibilism Marvin is advocating for, which he (Sapolsky) calls ‘mitigated free will’, a term taken from his colleague Joshua Greene. It’s a long quote, but well worth transcribing, as it captures my own skepticism as exactly as anything I’ve read:

Here’s how I’ve always pictured mitigated free will:

There’s the brain – neurons, synapses, neurotransmitters, receptors, brain-specific transcription factors, epigenetic effects, gene transpositions during neurogenesis. Aspects of brain function can be influenced by someone’s prenatal environment, genes, and hormones, whether their parents were authoritarian or their culture egalitarian, whether they witnessed violence in childhood, when they had breakfast. It’s the whole shebang, all of this book.

And then, separate from that, in a concrete bunker tucked away in the brain, sits a little man (or woman, or agendered individual), a homunculus at a control panel. The homunculus is made of a mixture of nanochips, old vacuum tubes, crinkly ancient parchment, stalactites of your mother’s admonishing voice, streaks of brimstone, rivets made out of gumption. In other words, not squishy biological brain yuck.

And the homunculus sits there controlling behaviour. There are some things outside its purview – seizures blow the homunculus’s fuses, requiring it to reboot the system and check for damaged files. Same with alcohol, Alzheimer’s disease, a severed spinal cord, hypoglycaemic shock. 

There are domains where the homunculus and that biology stuff have worked out a détente – for example, biology is usually automatically regulating your respiration, unless you must take a deep breath before singing an aria, in which case the homunculus briefly overrides the automatic pilot.

But other than that, the homunculus makes decisions. Sure, it takes careful note of all the inputs and information from the brain, checks your hormone levels, skims the neurobiology journals, takes it all under advisement, and then, after reflecting and deliberating, decides what you do. A homunculus in your brain, but not of it, operating independently of the material rules of the universe that constitute modern science.

This captures perfectly, to me, the dilemma of those sorts of compatibilists who insist on determinism, but… They seem more than reluctant to recognise the implications of that determinist commitment. It’s an amusing description – I love the bit about the aria – but it seems to me just right. As to the implications for our cherished sense of freedom, we can at least reflect that it has ever been thus, and it hasn’t stopped us thriving in our selfish, selfless ways. But as to the implications for those of us less fortunate in the forces that have moved us since childhood and before, that’s another story.

References

https://ussromantics.com/2018/05/15/is-free-will-a-thing-apparently-not/

R Sapolsky, Behave: the biology of humans at our best and worst, Bodley Head, 2017. Note especially Chapter 16, ‘Biology, the criminal justice system and free will’.

https://plato.stanford.edu/entries/compatibilism/#FreWil

https://www.psychologytoday.com/au/blog/one-among-many/201803/five-arguments-free-will

https://www.theatlantic.com/notes/2016/06/free-will-exists-and-is-measurable/486551/

Written by stewart henderson

October 27, 2018 at 1:25 pm