Posts Tagged ‘neurology’
free will, revisited

yet to be read
I’ve written about free will before, here, and especially here (the commentary at the end is particularly interesting, IMHO), and probably in other posts as well, but I’ve been thinking about it a lot lately, so maybe it’s time for a refresher (though, if I say so myself, those earlier posts stand up pretty well).
I first became acquainted with and absorbed in the ‘philosophical’ argy-bargy about free will way back in the seventies, when I read Free Will and Determinism, a collection of essays edited by Bernard Berofsky. It was published in 1966, and is, amazingly (since I’ve moved house about 50 times), still in my possession. Glancing through it again now brings back memories, but more importantly, the arguments, which mostly favour compatibilism, aka soft determinism, seem both naive and somewhat arrogant, if that’s the word. That is, they’re mostly variants of ‘of course we have free will – we display it in every decision we make – but many of us find it hard to present a rational explanation of it, so I’ll do it for you’. Only one philosopher, from memory, John Hospers, argued for ‘hard determinism’, that’s to say, for the absence of free will. And though I found his argument a bit clunky (it was largely based on Freudian and neo-Freudian psychology), it was the only one that really stuck in my mind, though I didn’t quite want to be convinced.
In more recent years, after reading Sam Harris’ short book on free will, and Robert Sapolsky’s treatment of the issue towards the end of his monumental book Behave, I’ve felt as if the scales have dropped from my eyes. Another factor I should mention was a talk I gave to the SA Humanist Society a few years ago on the subject, which didn’t quite go all the way on ‘no free will’, and a pointed question from one of the attendees left me floundering for a response. It was likely that experience that made me feel the need to revisit the issue more comprehensively. So, for memory lane’s sake, I’m going to reread these old essays and then comment on them. And hopefully I’ll be able to slip in a bonobo mention along the way!
I should mention, as Sapolsky does in Behave, that neurology has come a long way since the 1970s. More papers have been published in the field in the first two decades of the 21st century than in all the centuries before, which is hardly surprising. With this, and our greater understanding of genetics, epigenetics, developmental psychology and other fields relevant to the topic, it will behoove me to be fair to the thinking of intellectuals writing a number of generations before the present. However, I’m not interested in giving a historical account – how Cicero, or Augustine of Hippo, or Spinoza, or John Stuart Mill conceptualised the problem was very much a product of the zeitgeist of their era, combined with their unique gifts. In the era I live in, and in the particularly WEIRD country (Australia) that is my home, religion is fast receding, and the sciences of neurophysiology, endocrinology, genetics and primatology, among others, have revolutionised our understanding of what it is to be human, or sentient, or simply alive. And they help us to understand our uniquely determined situation and actions.
So let me begin with Berofsky’s introduction, in which he raises a ‘problem’ with determinism:
The fact that classical mechanics did not turn out to be the universal science of human nature suggests that contemporary proponents of determinism do not ally themselves to this particular theory. Many ally themselves to no particular theory at all, but try to define determinism in such a way that its rejection is not necessitated by the rejection of any particular scientific theory.
This takes us back to the effect upon the general public of such notions as ‘quantum indeterminacy’ and its manipulation by pedlars of ‘quantum woo’ (for example, The tao of physics, by Fritjof Capra, which I haven’t read). But clearly, however we might understand quantum superposition and action-at-a-distance, they have no effect at the macro level of brain development, genetic inheritance and the like, and they certainly can’t be used to defend the concept of free will. The ‘no free will’ argument does rely on determining factors, and openly so. Our genetic inheritance, the time and place of our birth, our family circumstances, our ethnicity, our diet, these are among many influences that we don’t see as ‘theoretical’, but factual.
Berofsky goes on to worry over types of causes and causal laws in what seems to me a rather fruitless ‘philosophical’ way.
A determinist, then, is a person who believes that all events (facts, states) are lawful in the sense, roughly, that for event e, there is a distinct event d plus a (causal) law which asserts, ‘Whenever d, then e’.
The extremely general or universal character of this thesis has raised many questions, some of which concern the status of the thesis. Some have held the position as a necessary or a priori truth about the world. Others have insisted that determinism is itself a scientific theory, but much more general than most other scientific theories.
As you can imagine, none of this is of any concern to a working neurologist, biochemist or primatologist. In trying to determine how oxytocin levels affect behaviour in certain subjects, for example, they won’t be reflecting on a priori truths or causal laws, they’ll be looking at all the other possible confounding and co-determining factors that might contribute to the behaviour. It seems to me that traditional philosophical language is getting in the way here of attributing effects to causes, however partially.
Berofsky points out, on behalf of some philosophers, that determinism isn’t a scientific theory in that it’s essentially unfalsifiable (my language, not his), as it can always be claimed that some so far undiscovered causal factor has contributed to the behaviour or effect. But scientists don’t consider determinism to be a theory, but rather the sine qua non of scientific practice, indeed of everyday life. We live in a world of becauses: we eat x because we’re hungry/it’s tasty/it’s healthy/it reminds us of childhood, etc. We don’t think like this in terms of laws. We needn’t think of it at all, just as a dog wags her tail when she sees her owner after a long absence (or not, if he’s also her abuser).
So much for determinism, over which too much verbiage has been employed. The real issue that exercises most people is free will, freedom, or agency. Here’s how Berofsky introduces the subject:
It has been maintained that if an action is determined, then the person was not performing the action of his own free will. For surely, it is argued, if the antecedent conditions are such that they uniquely determine by law the ensuing result (the action), then it was not within the power of the person to do otherwise. And a person does A freely if, and only if, he could have done something other than A. Let us call this position ‘incompatibilism’. Incompatibilists usually conclude as well that if a person’s action is determined, then he is not morally responsible for having done it, since acting freely is a necessary condition of being morally responsible for the action.
This is a long-winded, i.e. typically philosophical way of putting the ‘no free will’ argument, which is usually countered by an ‘of course I could’ve done otherwise’ response, and the accusation that determinists are not just kill-joys but kill-freedoms. Presumably this would be a ‘compatibilist’ response, and many find it the only common-sense response, if we want to view ourselves as anything other than automatons.
But there are obvious problems with compatibilism, and here’s my ‘death by a thousand cuts’ response. There are a great many Big Things in our life about which we, indisputably, have no choice. No person, living or dead, got to choose the time and place of their birth, or conception. No person got to choose their parents, or their genetic inheritance. They had no choice as to how their brain, limbs, organs and so forth grew and developed whilst in the womb. So, no freedom of choice up to that time. When, then, did this freedom begin? The compatibilist would presumably argue – ‘when we make our own observations and inferences, which starts to happen more frequently as we grow’. And there would be much hand-waving about when this gradually starts to happen, until we’re our own autonomous selves, who could’ve done otherwise. And here we get to the response of Sam Harris and others, that this ‘self’ is a myth. I would put it differently, that the self is a useful marker for each person and their individuality. These selves are all determined, but they’re each uniquely determined, and at least this uniqueness is something we can salvage from the firm grip of determinism. What is mythical about the self is its self-determined nature.
As Berofsky puts it, guilt and remorse are strong indications, for compatibilists, that free will exists. I would add regret to those feelings, and I would admit, as does Sapolsky, that these strong, sometimes overwhelming feelings, based largely on the idea that we should have done otherwise, are our strongest arguments for rejecting the no free will position.
This issue of guilt needs to be looked at more closely, since our whole legal system is based on questions of guilt or innocence. I’ll reserve that for next time.
References
Bernard Berofsky, ed. Free will and determinism, 1966
Robert Sapolsky, Behave: the biology of humans at our best and worst, 2017
Sam Harris, Free will, 2012
language origins: some reflections

smartmouth
Jacinta: So a number of readings and listenings lately have caused us to think about how the advent of language would have brought about something of a revolution in human society – or any other society, here or on any other planet out there.
Canto: Yes, we heard about orangutan kiss-squeaks on a New Scientist podcast the other day, and we’re currently reading Rebecca Wragg Sykes’ extraordinary book Kindred, a thoroughly comprehensive account of Neanderthal culture, which we’ve clearly learned so much more about in recent decades. She hasn’t really mentioned language as yet (we’re a little over halfway through), but the complexity and sophistication she describes really brings the subject to mind. And of course there are cetacean and bird communications, inter alia.
Jacinta: So how do we define a language?
Canto: Yeah, we need to define it in such a way that other creatures can’t have it, haha.
Jacinta: Obviously it evolved in a piecemeal way, hence the term proto-language. And since you mentioned orangutans, here’s a quote from a 2021 research paper on the subject:
Critically, bar humans, orangutans are the only known great ape to produce consonant-like and vowel-like calls combined into syllable-like combinations, therefore, presenting a privileged hominid model for this study.
And what was the study, you ask? Well, quoting from the abstract:
… we assessed information loss in proto-consonants and proto-vowels in human pre-linguistic ancestors as proxied by orangutan consonant-like and vowel-like calls that compose syllable-like combinations. We played back and re-recorded calls at increasing distances across a structurally complex habitat (i.e. adverse to sound transmission). Consonant-like and vowel-like calls degraded acoustically over distance, but no information loss was detected regarding three distinct classes of information (viz. individual ID, context and population ID). Our results refute prevailing mathematical predictions and herald a turning point in language evolution theory and heuristics.
Canto: So, big claim. So these were orangutan calls. I thought they were solitary creatures?
Jacinta: Well they can’t be too solitary, for ‘the world must be orangutan’d’, to paraphrase Shakespeare. And interestingly, orangutans are the most tree-dwelling of all the great apes (including us of course). And that means a ‘structurally complex habitat’, methinks.
Canto: So here’s an even more recent piece (December 2022) from ScienceDaily:
Orangutans’ tree-dwelling nature means they use their mouth, lips and jaw as a ‘fifth hand’, unlike ground-dwelling African apes. Their sophisticated use of their mouths, mean orangutans communicate using a rich variety of consonant sounds.
Which is interesting in that they’re less close to us genetically than the African apes. So this research, from the University of Warwick, focused a lot on consonants, which until recently seemed quintessentially human productions. Researchers often wondered where these consonants came from, since African apes didn’t produce them. Their ‘discovery’ in orangutans has led, among other things, to a rethinking of our arboreal past.
Jacinta: Yes, there’s been a lot of focus recently on vowel and consonant formation, and the physicality of those formations, the muscles and structures involved.
Canto: Well in this article, Dr Adriano Lameira, a professor of psychology who has long been interested in language production, and has been studying orangutans in their natural habitat for 18 years, notes that their arboreal lifestyle and feeding habits have enabled, or in a sense forced, them to use their mouths as an extra appendage or tool. Here’s how Lameira puts it:
It is because of this limitation, that orangutans have developed greater control over their lips, tongue and jaw and can use their mouths as a fifth hand to hold food and manoeuvre tools. Orangutans are known for peeling an orange with just their lips so their fine oral neuro-motoric control is far superior to that of African apes, and it has evolved to be an integral part of their biology.
Jacinta: So they might be able to make more consonantal sounds, which adds to their repertoire perhaps, but that’s a long way from what humans do, putting strings of sounds together to make meaningful ‘statements’. You know, grammar and syntax.
Canto: Yes, well, that’s definitely going to the next level. But getting back to those kiss-squeaks I mentioned at the top, before we get onto grammar, we need to understand how we can make all the sounds, consonantal and vowel, fricative, plosive and all the rest. I’ve found the research mentioned in the New Scientist podcast just the other day, which compares orangutan sounds to human beatboxing (which up till now I’ve known nothing about, but I’m learning). Dr Lameira was also involved in this research, so I’ll quote him:
“It could be possible that early human language resembled something that sounded more like beatboxing, before evolution organised language into the consonant — vowel structure that we know today.”
Jacinta: Well that’s not uninteresting, and no doubt might fit somewhere in the origins of human speech, the details of which still remain very much a mystery. Presumably it will involve the development of distinctive sounds and the instruments and the musculature required to make them, as well as genes and neural networks – though that might be a technical term. Neural developments, anyway. Apparently there are ‘continuity theories’, favouring gradual development, probably over millennia, and ‘discontinuity theories’, arguing for a sudden breakthrough – but I would certainly favour the former, though it might have been primarily gestural, or a complex mixture of gestural and oral.
Canto: You’d think that gestural, or sign language – which we know can be extremely complex – would develop after bipedalism, or with it, and both would’ve evolved gradually. And, as we’re learning with Neanderthals, the development of a more intensive sociality could’ve really jump-started language processes.
Jacinta: Or maybe H sapiens had something going in the brain, or the genes, language-wise or proto-language-wise, that gave them the competitive advantage over Neanderthals? And yet, reading Kindred, I find it hard to believe that Neanderthals didn’t have any language. Anyway, let’s reflect on JuLingo’s video on language origins, in which she argues that language was never a goal in itself (how could it be), but a product of the complexity that went along with bipedalism, hunting, tool-making and greater hominin sociality. That’s to say, social evolution, reflected in neural and genetic changes, as well as subtle anatomical changes for the wider production and reception of sounds, perhaps starting with H ergaster around 1.5 million years ago. H heidelbergensis, with a larger brain size and wider spinal canal, may have taken language or proto-language to another level, and may have been ancestral to H sapiens. It’s all very speculative.
Canto: Yes, I don’t think I’m much qualified to add anything more – and I’m not sure if anyone is, but of course there’s no harm in speculating. Sykes speculates thusly about Neanderthals in Kindred:
Complementary evidence for language comes from the fact Neanderthals seem to have had similar rates of handedness. Tooth micro-scratches and patterns of knapping on cores [for stone tool-making] confirm they were dominated by right-handers, and this is also reflected in asymmetry in one side of their brains. But when we zoom in further to genetics, things get increasingly thorny. The FOXP2 gene is a case in point: humans have a mutation that changed just two amino acids from those in other animals, whether chimps or platypi. FOXP2 is definitely involved with cognitive and physical language capacity in living people, but it isn’t ‘the’ language gene; no such thing exists. Rather it affects multiple aspects of brain and central nervous system development. When it was confirmed that Neanderthals had the same FOXP2 gene as us, it was taken as strong evidence that they could ‘talk’. But another, subtler alteration has been found that happened after we’d split from them. It’s tiny – a single protein – and though the precise anatomical effect isn’t yet known, experiments show it does change how FOXP2 itself works. Small changes like this are fascinating, but we’re far from mapping out any kind of genetic recipe where adding this, or taking away that, would make Neanderthals loquacious or laconic.
Rebecca Wragg Sykes, Kindred: Neanderthal life, love, death and art, pp 248-9
Jacinta: Yes, these are good points, and could equally apply to early H sapiens, as well as H ergaster and H heidelbergensis. Again we tend to think of language as the full-blown form we learn about in ‘grammar schools’, but most languages today have no written form, and so no fixed grammar – am I right?
Canto: Not sure, but I understand what you’re getting at. The first English grammar book, more like a pamphlet, was published in 1586, when Shakespeare was just starting out as a playwright, and, as with ‘correct’ spelling and pronunciation, would’ve been politically motivated – the King’s English and all.
Jacinta: Queen at that time. Onya Elizabeth. But the grammar, and the rest, would’ve been fixed enough for high and low to enjoy Shakespeare’s plays. And to make conversation pretty fluid.
Canto: Yes, and was handed down pretty naturally, I mean without formal schooling. It’s kids who create new languages – pidgins that become creoles – when necessity necessitates. I read that in a Scientific American magazine back in the early eighties.
Jacinta: Yes, so they had the genes and the neural equipment to form new hybrid languages, more or less unconsciously. So much still to learn about all this…
Canto: And so little time….
References
Kindred: Neanderthal life, love, death and art, by Rebecca Wragg Sykes, 2021
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8478518/
https://www.sciencedaily.com/releases/2022/12/221220112426.htm
https://www.sciencedaily.com/releases/2023/06/230627123117.htm
https://en.wikipedia.org/wiki/Origin_of_language
https://humanorigins.si.edu/evidence/human-fossils/species/homo-heidelbergensis
dyslexia is not one thing 4: the left and the right

a one-sided view (the left) of the parts of the brain involved in language and reading processing
Canto: So we’re still looking at automaticity, and it’s long been observed that dyslexic kids have trouble retrieving names of both letters and objects from age three, and then with time the problem with letters becomes more prominent. This means that there just might be a way of diagnosing dyslexia from early problems with object naming, which of course starts first.
Jacinta: And Wolf is saying that it may not be just slowness but the use of different neural pathways, which fMRI could reveal.
Canto: Well, Wolf suggests possibly the use of right-hemisphere circuitry. Anyway, here’s what she says re the future of this research:
It is my hope that future researchers will be able to image object naming before children ever learn to read, so that we can study whether the use of a particular set of structures in a circuit might be a cause or a consequence of not being able to adapt to the new task of literacy (Wolf, p181).
So that takes us to the next section: “An impediment in the circuit connections among the structures”.
Jacinta: Connections between. And if we’re talking about the two hemispheres, the corpus callosum could’ve provided a barrier, as it does with stroke victims…
Canto: Yes, connections within the overall reading circuit, which involves different parts of the brain, can be more important for reaching automaticity than the brain regions themselves, and a lot of neuroscientists are exploring this connectivity. Apparently, according to Wolf, three forms of disconnections are being focussed on by researchers. One is an apparent disconnection ‘between frontal and posterior language regions, based on underactivity in an expansive connecting area called the insula. This important region mediates between relatively distant brain regions and is critical for automatic processing’ (Wolf, p182). Another area of disconnection involves the occipital-temporal region, also known as Brodmann area 37, which is activated by reading in all languages. Normally, strong, automatic connections are created between this posterior region and frontal regions in the left hemisphere, but dyslexic people make connections between the left occipital-temporal area and the right-hemisphere frontal areas. It also seems to be the case that in dyslexics the left angular gyrus, accessed by good beginning readers, doesn’t effectively connect with other left-hemisphere language regions during reading and the processing of phonemes.
Jacinta: And it’s not just fMRI that’s used for neuro-imaging. There’s something called magnetoencephalography (a great word for dyslexics) – or MEG – that gives an ‘approximate’ account of the regions activated during reading, and using this tool a US research group found that children with dyslexia were using a completely different reading circuitry, which helps explain the underactivity in other regions observed by other researchers.
Canto: And leads to provocative suggestions of a differently arranged brain in some people. Which takes us to the last of the four principles: ‘a different circuit for reading’. In this section, Wolf begins by recounting the ideas of the neurologists Samuel T Orton and Anna Gillingham in the 1920s and 1930s. Orton rejected the term ‘dyslexia’, preferring ‘strephosymbolia’. Somehow it didn’t catch on, but essentially it means ‘twisted symbols’. He hypothesised that in the non-dyslexic, the left-hemisphere processes identify the correct orientation of letters and letter sequences, but in the dyslexic this identification was somehow hampered by a problem with left-right brain communication. And decades later, in the 70s this hypothesis appeared to be validated, in that tests on children in which they were given ‘dichotic tasks’ – to identify varied auditory signals presented to different ears – revealed that impaired readers didn’t use left-hemisphere auditory processes in the same way as average readers. Other research showed that dyslexic readers showed ‘right-hemisphere superiority’, by which I think is meant that they favoured the right hemisphere for tasks usually favoured by the left.
Jacinta: Yes, weakness in the left hemisphere for handling linguistic tasks. But a lot of this was dismissed, or questioned, for being overly simplistic. You know, the old left-brain right-brain dichotomy that was in vogue in popular psychology some 30 years ago. Here’s what Wolf, very much a leading expert in this field, has to say on the latest findings (well, circa 2010):
In ongoing studies of the neural [development] of typical reading, the research group at Georgetown University [a private research university in Washington DC] found that over time there is ‘progressive disengagement’ of the right hemisphere’s larger visual recognition system in reading words, and an increasing engagement of left hemisphere’s frontal, temporal, and occipital-temporal regions. This supports Orton’s belief that during development the left hemisphere takes over the processing of words (Wolf, p185).
Canto: Yes, that’s ‘typical reading’. Children with dyslexia ‘used more frontal regions, and also showed much less activity in left posterior regions, particularly in the developmentally important left-hemisphere angular gyrus’. Basically, they used ‘auxiliary’ right-hemisphere regions to compensate for these apparently insufficiently functional left regions. It seems that they are using ‘memory’ strategies (from right-hemisphere structures) rather than analytic ones, and this causes highly predictable delays in processing.
Jacinta: A number of brain regions are named in this explanation/exploration of the problems/solutions for dyslexic learners, and these names mean very little to us, so let’s provide some – very basic – descriptions of their known functions, and their positions in the brain.
Canto: Right (or left):
The angular gyrus – which, like all the other regions, is worth looking up on google images as to placement – is, like most cortical structures, a paired region, with one in each hemisphere. Described as ‘horseshoe-shaped’, it’s in the parietal lobe, or more specifically ‘the posterior region of the inferior parietal lobe’. The parietal lobes are paired regions at the top and back of the brain, and each is divided into a superior and an inferior parietal lobule, the superior sitting atop the inferior. The angular gyrus is an essential region for reading and writing, so it comes first.
The occipital-temporal zone presumably implies a combo of the occipital and temporal lobes. The occipital is the smallest of the four lobes (occipital, temporal, parietal, frontal), each of which is ‘sided’, left and right. The junction of these two lobes with the parietal (TPO junction) is heavily involved in language processing as well as many other high-order functions.
Jacinta: Okay, that’ll do. It’s those delays you mention, the inability to attain automaticity, which characterises the dyslexic, and it appears to be caused by the use of a different brain circuitry, circuitry of the right-hemisphere. Best to quote Wolf again:
The dyslexic brain consistently employs more right-hemisphere structures than left-hemisphere structures, beginning with visual association areas and the occipital-temporal zone, extending through the right angular gyrus, supramarginal gyrus, and temporal regions. There is bilateral use of pivotal frontal regions, but this frontal activation is delayed (Wolf, p186).
Canto: The supramarginal gyrus is located just in front of and connected to the angular gyrus (a gyrus is anatomically defined as ‘a ridge or fold between two clefts on the cerebral surface in the brain’). These two gyri, as mentioned above, make up the inferior parietal lobe.
Jacinta: Wolf describes cumulative research from many parts of the world which tends towards a distinctive pattern in dyslexia, but also urges skepticism – the human brain’s complexity is almost too much for a mere human brain to comprehend. No two brains are precisely alike, and there’s unlikely to be a one-size-fits-all cause or treatment, but explorations of this deficit are of course leading to a more detailed understanding of the brain’s processes involving particular types of object recognition, in visual and auditory terms.
Canto: It’s certainly a tantalising field, and we’ve barely scratched the surface, and we’ve certainly not covered any, or very much, of the latest research. One of the obvious questions is why some brains resort to different pathways from the majority, and whether there are upsides to offset the downsides. Is there some clue in the achievements of people known or suspected to have been dyslexic in the past? I feel rather jealous of those researchers who are trying to solve these riddles….
References
Maryanne Wolf, Proust and the squid: the story and science of the reading brain, 2010
https://www.kenhub.com/en/library/anatomy/angular-gyrus
https://academic.oup.com/brain/article/126/9/2093/367492
dyslexia is not one thing 3: problems with automaticity
Canto: So the next hypothesised basic source of dyslexia is ‘a failure to achieve automaticity’, that’s to say the sort of rapid, more or less unconscious processing of sounds into letters and vice versa, which probably means effective connection between brain regions or structures.
Jacinta: Perhaps because one of the structures is somehow internally dysfunctional.
Canto: Yes, and it often begins with vision. Researchers have found that many dyslexic individuals couldn’t separate two rapidly succeeding visual flickers as clearly as other individuals – an apparent processing problem. Similar research with dyslexic children found that, though they could identify stimuli initially as well as the non-dyslexic, they fell behind with added complexity and speed. This occurred more or less equally whether the stimuli were aural or visual. The connections just didn’t come ‘naturally’ to them.
Jacinta: So what about the connection between language – I mean speech, which is tens of thousands of years old – and reading and writing, a much newer development for our brains to deal with? Do dyslexic people have problems with processing good old speech? Are they slower to learn to talk?
Canto: Yes, a good question. Wolf describes research in which children with dyslexia in a number of languages, including English, ‘were less sensitive to the rhythm in natural speech, which is partly determined by how the sounds in words change through stress and ‘beat patterns’’ (Wolf, p177). Others have found breakdowns in processing in various motor tasks involving hearing and seeing. That’s to say, in the automaticity of such tasks. One psychologist who studies dyslexic children found an extensive range of problems with processing speed, especially a time gap or asynchrony between visual and auditory processing, and this observation has become commonplace.
Jacinta: But does this relate specifically to learning to speak? I’ve heard that Einstein was slow at that as a child.
Canto: Yes, it’s said that he didn’t learn to speak full sentences before the age of five. But here we’re just talking about ‘naming speed’, and how it appears to use the same neurological structures as reading, since problems with one are predictive of problems with the other.
Jacinta: And the problem isn’t so much with naming per se, but the speed, the gap.
Canto: Yes, the lack of automaticity. Neurologists working in this field have developed ‘rapid automatized naming’ (RAN) tasks, which have become the most effective predictors of reading performance, regardless of language. Wolf herself has developed a refinement, rapid alternating stimulus (RAS), which, as the name suggests, gives more weight to attention-switching automaticity. Here’s an interesting quote from Wolf:
If you consider that the whole development of reading is directed toward the ability to decode so rapidly that the brain has time to think about incoming information, you will understand the deep significance of those naming speed findings. In many cases of dyslexia, the brain never reaches the highest stages of reading development, because it takes too long to connect the earliest parts of the process. Many children with dyslexia literally do not have time to think in the medium of print.
Jacinta: It makes me think of the unconscious, but not the Freudian one. A processing that you don’t have to think about. So that you can think about the info, not the form that encapsulates it.
Canto: Yes, and none of this explains why some have these problems with automaticity – which brings us back to neurology. Are dyslexic individuals using a different circuit from the rest of us, and does this explain their skills and abilities in other areas? Remember the names – Einstein, da Vinci, Gaudi, Picasso… not that dyslexia guarantees genius or anything…
Jacinta: Yes, far from it, I’d say, but it’s a fascinating conundrum.
Canto: So, neurology. And this takes us to how the ‘reading brain’, a very new phenomenon, evolutionarily speaking, came into being. fMRI images appear to confirm hypotheses that the brain ‘uses older object recognition pathways in the occipital-temporal zone (area 37) to name both letters and objects’ (Wolf, p179). It’s a process described as ‘neuronal recycling’. And it takes us to brain regions associated with particular tasks. For example, the left occipital-temporal area is apparently more associated with object naming, a much older task, evolutionarily speaking, than letter naming, and one that takes up more cortical space. The more streamlined, specialised use of this region for letters, and the development of automaticity for that purpose, is a prime example of our much-vaunted neuroplasticity.
Jacinta: What they’ve called RAN is always faster for letters than objects – that’s perhaps because letters are a small, even quite tiny subset of the near-infinite set of objects.
Canto: Yes, and here I’m going to quote a difficult passage by Wolf at some length, and then try, with your help, to make sense of it:
…culturally invented letters elicit more activation than objects in each of the other ‘older structures’ (especially temporal-parietal language areas) used for reading in the universal reading brain. This is why measures of naming speed like RAN and RAS predict reading across all known languages. It is also why, side-by-side, the brain images of the object- and letter-naming tasks are like comparative evolutionary photos of a pre-reading and post reading brain (Wolf, p181).
Jacinta: So this is a bit confusing. Culturally invented letters are new, evolutionarily speaking. And there are older language structures used for reading. Repurposed? Added onto? A bit of renovation? And what exactly is ‘the universal reading brain’?
Canto: Good question, and a quick internet search reveals much talk of a ‘universal reading network’. Here’s a fascinating abstract from a 2020 study, some ten years after the publication of Wolf’s book. It’s entitled “A universal reading network and its modulation by writing system and reading ability in French and Chinese children”:
Are the brain mechanisms of reading acquisition similar across writing systems? And do similar brain anomalies underlie reading difficulties in alphabetic and ideographic reading systems? In a cross-cultural paradigm, we measured the fMRI responses to words, faces, and houses in 96 Chinese and French 10-year-old children, half of whom were struggling with reading. We observed a reading circuit which was strikingly similar across languages and consisting of the left fusiform gyrus, superior temporal gyrus/sulcus, precentral and middle frontal gyri. Activations in some of these areas were modulated either by language or by reading ability, but without interaction between those factors. In various regions previously associated with dyslexia, reading difficulty affected activation similarly in Chinese and French readers, including the middle frontal gyrus, a region previously described as specifically altered in Chinese. Our analyses reveal a large degree of cross-cultural invariance in the neural correlates of reading acquisition and reading impairment.
So this research, like no doubt previous research, identifies various brain regions associated with reading ability and impairment, and finds that the same automaticity, or lack thereof, is associated with the same regions, such as the middle frontal gyrus, in both alphabetic and ideographic reading systems. I think this is further confirmation of the research work Wolf is citing. Of course, I don’t know much about these brain regions. A course in neurology is required.
Jacinta: But what Wolf appears to be saying in that earlier quote is that you can get brain images (via fMRI) of object naming (older brain) tasks and put them side by side with images of letter naming tasks (younger brain), and it’s like seeing the results of evolution. Sounds a bit much to me. I suppose you can see a different pattern. Isn’t fMRI based on the magnetism of iron in the blood?
Canto: Yes yes. This is complex, but of course it’s true that the neural networking required for reading and writing is much more recent than that for language – and remember that of the 7000 or so languages we know of, only about 300 have a written form, which suggests that the Aborigines, before whities arrived, and the Papua-New Guineans, who have about 700 different languages on their island, were unable to even be dyslexic, or were all dyslexic without knowing it, or giving a flying fuck about it, because they had no writing, and no wiring for reading it.
Jacinta: So it would be interesting, then, to scan the brains of those language users – and there are no humans who aren’t language users – who don’t have writing. Take for example the Australian Aborigines, who became swamped by white Christian missionaries determined to ‘civilise’ them, more or less overnight in evolutionary terms, through teaching them to read and write. And then would’ve been characterised as backward for not picking up those skills.
Canto: That’s an interesting point, but it’s the same even in ‘cradles of civilisation’ such as Britain, where the vast majority were illiterate, and encouraged to be so, 500 years ago. At that time the printing press was a new-fangled device, church services were mostly conducted in Latin, and it was convenient to keep the peasantry in ignorance and in line. And yet, when it became more convenient to have a literate population, the change appears to have been relatively seamless, dyslexia notwithstanding. So it seems that, from a neurological perspective, little change was required.
Jacinta: Yes, that’s a good point, and it points to brain plasticity. Curiouser and curiouser – so it’s not so much about evolution and genes, but relatively rapid neural developments…. to be continued…
References
M Wolf, Proust and the squid, 2010
A bit about schizophrenia – a very bizarre ailment
Having, for a book group, read a strange novel written a little over 50 years ago, by Doris Lessing, Briefing for a descent into hell, the title of which may or may not be ironic, and being reasonably interested in the brain, its functions and dysfunctions, I’ve decided to use this post to update my tiny knowledge of schizophrenia, a disorder I’ve had some acquaintance with.
Lessing’s book may or may not be about schizophrenia, because it doesn’t concern itself with labelling any mental disorders, or with the science of brain dysfunction in any way. The focus is upon the imaginative world of an Oxbridge academic, a lecturer in classical mythology or some such, who, having been found wandering about in some Egdon Heath-type landscape, with no identification papers or money, and a lack of proper lucidity, is brought into a psychiatric facility for observation and treatment. The vast bulk of the book is told from this individual’s perspective. Not that he tells the story of his illness – he simply tells stories, or Lessing tells stories on his behalf. Somehow the reader is allowed to enter the main character’s inner landscape, which includes a voyage around the Pacific Ocean, another voyage around the solar system (conducted by classical deities) and harrowing, but fake, war-time experiences in the Balkans. Along the way we’re provided with the occasional dazzling piece of insight which I think we’re asked to consider as the upside, or mind-expanding nature, of ‘madness’ – somewhat in the spirit of Huxley’s Doors of Perception and Timothy Leary’s psychedelia. At the end of the book the professor is returned to ‘normality’ via electric shock treatment, and becomes, apparently, as uninteresting a character as most of the others in the book, especially the doctors responsible for his treatment, only known as X and Y.
So, there are problems here. First, Lessing’s apparent lack of interest in the science of the brain means that we’re at a loss to know what the academic is suffering from. Madness and insanity are not, of course, legitimate terms for mental conditions, and Lessing avoids using them, but offers nothing more specific, so we’re reduced to trying to deduce the condition from what we know of the behaviour and ramblings of an entirely fictional character. I’ve come up with only two not very convincing possibilities – schizophrenia and brain tumour. A brain tumour is a useful literary device due to the multifaceted nature of our white and grey matter, which constitutes the most complex organ in the known universe, as many an expert has pointed out. A benign tumour – one that doesn’t metastasise – may bring on a multiplicity of neurons or connections between them that increase the ability to confabulate – though I’ve never heard of such an outcome and it’s more likely that our ‘imagination’ is the product of multiple regions spread throughout the cortex. Schizophrenia only really occurs to me here because the professor was found wandering ‘lonely as a cloud’, far from home, having had his wallet presumably stolen, so that it took some time to identify him. This reminds me of a friend who suffers from this condition, and has had a similar experience more than once.
One of the symptoms of schizophrenia is called ‘loss of affect’, which means that the sufferer becomes relatively indifferent to the basics – food, clothing and shelter – so caught up is he in his mental ramblings, which he often voices aloud. It’s rare, however, for schizophrenia to make its first appearance in middle age, as appears to be the case here. Another reason, though, that my thoughts turned to schizophrenia was something I read online, in reference to Briefing for a descent into hell. I haven’t read any reviews of the book, and in fact I had no idea when the book was published, as I’d obtained a cheapie online version, which was undated. So in trying to ascertain the date – 1971, earlier than I’d expected, but in many ways illuminating – I happened to note a brief reference to a review written when the book came out, by the US essayist Joan Didion. She wrote that the book presented an ‘unconvincing description of mental illness’ and that the book displayed the influence of R D Laing. A double bullseye in my opinion.
I read a bit of R D Laing, the noted ‘anti-psychiatrist’ in the seventies, after which he went decidedly out of fashion. His focus was primarily on schizophrenia – as for example in his 1964 paper ‘Is schizophrenia a disease?’ – though he treated other psychoses in much the same way as ‘a perfectly rational response to an insane world’. This is doubtless an oversimplification of his views, but in any case he seems to have given scant regard to what is actually going on in the brain of schizophrenics.
Since the sixties and seventies, though, and especially since the nineties and the advent of PET scanning, MEG, fMRI and other technologies, the field of neurology has advanced exponentially, and the mental ailments we suffer from are being pinpointed a little more accurately vis-à-vis brain regions and processes. I’ve noted, though, that there’s still a certain romantic halo around the concept of ‘madness’, which after all human society has been ambivalent about since the beginning. The wise fool, the mad scientist and the like have long had their appeal, and it may even be that in extremis, insanity may be a ‘reasonable’ option. As for schizophrenia, maybe we can live with our ‘demons’, as was apparently the case for John Nash after years of struggle, but it’s surely worth trying to get to the bottom of this often crippling disorder, so that it can be managed or cured without resort to disabling or otherwise unhealthy or inconvenient dependence on medication.
Schizophrenia is certainly weird, and its causes are essentially unknown. There’s a genetic element – you’re more likely to suffer from it if it runs in the family – but it can also be brought on by stress and/or regular drug use, depending no doubt on the drug. It’s currently described as affecting a whopping one in a hundred people (with enormous regional variation, apparently), but perhaps if we’re able to learn more about the variety of symptoms we might be able to break it down into a group of affiliated disorders. There is no known cure as yet.
One feature of the ‘neurological revolution’ of the last few decades has been the focus on neurotransmission and electrochemical pathways in the brain, and dopamine, a neurotransmitter, was an early target for understanding and treating the disorder (and many others). And that’s still ongoing:
Current research suggests that schizophrenia is a neurodevelopmental disorder with an important dopamine component.
That’s from a very recent popular website, but research is of course growing, and pointing at other markers. A reading of the extensive Wikipedia article on schizophrenia has a near-paralysing effect on any attempt to define or describe it in a blog post like this. Glutamate, the brain’s ‘most abundant excitatory neurotransmitter’, has been a major recent focus, but it’s unlikely that we’ll get to the bottom of schizophrenia by examining brains in isolation from the lived experience of their owners. Genetics, epigenetics, stress, living conditions and associated disorders, inter alia, all appear to play a part. And due to its strangeness, its apparent hallucinatory nature, its modern associations of alienation and dystopia – think King Crimson’s ’21st century schizoid man’ and much of the oeuvre of Bowie (mostly his best work) – it’s hardly surprising that we feel something of an urge to venerate the schizoid personality, or at least to legitimate it.
Meanwhile, research will inevitably continue, as will the breaking down of intelligence and consciousness into neurotransmission pathways, hormone production, feedback loops, astrocytes etc etc, and ways of enhancing, re-routing, dampening and off-on switching neural signals via increasingly sophisticated and targeted medications… because a certain level of normality is optimal after all.
Meanwhile, I’m off to listen to some of that crazy music….
References
https://www.verywellmind.com/the-relationship-between-schizophrenia-and-dopamine-5219904
https://www.verywellmind.com/what-is-dopamine-5185621
still bitten by the bonobo bug…
Having written quite a few essays on a future bonoboesque world, I’ve found myself in possession of a whole book on our Pan paniscus relatives for the first time. All that I’ve gleaned about these fellow apes until now has been from the vasty depths of the internet, a gift that will doubtless keep on giving. My benefactor apologised for her gift to me, describing it as a coffee-table book, perhaps more pictorial than informative, but I’ve already learned much that’s new to me from the first few pages. For example, I knew from my basic research that bonobos were first identified as a distinct species in the late 1920s or early 1930s – I could never get the date straight, perhaps because I’d read conflicting accounts. De Waal presents a more comprehensive and interesting story, which involves, among other things, an ape called Mafuka, the most popular resident, or inmate, of Amsterdam Zoo between 1911 and 1916, later identified as a bonobo. The zoo now features a statue of Mafuka.
More important, though, for me, is that everything I’ve read so far reminds me of the purpose of my bonobo essays, but also makes me wonder if I haven’t focussed enough on one central feature of bonobo society, probably out of timidity. Here’s how De Waal puts it:
It is impossible to understand the social life of this ape without attention to its sex life: the two are inseparable. Whereas in most other species, sexual behaviour is a fairly distinct category, in the bonobo it has become an integral [part] of social relationships, and not just between males and females. Bonobos engage in sex in virtually every partner combination: male-male, male-female, female-female, male-juvenile, female-juvenile, and so on. The frequency of sexual contact is also higher than among most other primates.
In our own society, definitely still male-dominated but also with a legacy of religious sexual conservatism, this kind of all-in, semi-masturbatory sexual contact is absolutely beyond the pale. I’m reminded of the Freudian concept of sublimation I learned about as a teen – the eros or sex drive is channelled into other passionate, creative activities, and, voila, human civilisation! And yet, we’re still obsessed with sex, which we’re expected to transmute into sexual fulfilment with a lifelong partner. Meanwhile, the popularity of porn, or what I prefer to call the sex video industry, as well as the world’s oldest profession, indicates that there’s much that’s not quite right about our sex lives.
This raises questions about monogamy, the nuclear family, and even the human concept of love. This is ancient, but nevertheless dangerous territory, so for now I’ll stick with bonobos. As with chimps, female bonobos often, though not always, move to other groups at sexual maturity – it’s the males who stay in their natal group, making these species male-philopatric. Interestingly, this female dispersal has similarities to exogamous marriage practices, for example among some Australian Aboriginal groups. It’s interesting, then, that female-female bonds tend to be the strongest among bonobos, considering that there’s no kinship involved.
Needless to say, bonobos don’t live in nuclear families, and child-care is a more flexible arrangement than amongst humans, though the mother is naturally the principal carer. And it seems that bonobo mothers have a subtly closer relationship with their sons than their daughters:
the bond between mother and son is of particular significance in bonobo society where the son will maintain his connection with his mother for life and depend upon her for his social standing within the group. For example, the son of the society’s dominant female, the strong matriarch who maintains social order, will rise in the ranks of the group, presumably to ensure the establishment and perpetuation of unaggressive, non-competitive, cooperative male characteristics, both learnt and genetic, within the group.
Considering this point, it would be interesting to research mother-son relations among human single-parent families in the WEIRD world, a situation that has become more common in recent decades. Could it be that, given other support networks, rather than the disadvantages often associated with one-parent families in human societies, males from such backgrounds are of the type that command more respect than other males? Particularly, I would suspect, from females. Of course, it’s hard to generalise about human upbringing, but we might be able to derive lessons from bonobo methods. Bonobo mothers rarely behave punitively towards their sons, and those sons remain attached to their mothers throughout their lives. The sons of high-status females also attain high status within the male hierarchy.
Yet we are far from being able to emulate bonobo matriarchy, as we’re still a very patriarchal society. Research indicates that many women are still attracted to high-status, philandering men. That’s to say, they’ve been ‘trained’ to climb the success ladder through marriage or co-habitation rather than through personal achievement. They’ve also been trained into the idea of high-status males as dominating other males as well as females. It is of course changing, though too slowly, and with too many backward moves for the more impatient among us. Two macho thugocracies, Russia and China, are currently threatening the movement towards collaboration and inclusivity that we see in female-led democracies such as Taiwan, New Zealand and a number of Scandinavian countries. It may well be that in the aftermath of the massive destruction wrought by these thugocracies, there will come a reckoning, as occurred after the two ‘world wars’ with the creation of the UN and the growth of the human rights movement and international aid organisations, but it is frustrating to contemplate the suffering endured in the meantime, by those unlucky enough to be born in the wrong place at the wrong time.
Now of course all this might be seen as presenting a romanticised picture of bonobos (not to mention female humans), which De Waal and other experts warn us against. The difference in aggression between bonobos and chimps is more a matter of degree than of type, perhaps, and these differences can vary with habitat and the availability of resources. And yet we know from our studies of human societies that male-dominated societies are more violent. And male domination has nothing to do with simple numbers, it is rather about how a society is structured, and how that structure is reinforced. For example I’ve written recently about how the decidedly male god of the Abrahamic religions, originally written as YHWH or Elohim, emerged from a patriarchal, polygamous society in the Sinai region, with its stories of Jacob and Abraham and their many wives, which was reinforced in its structure by origin myths in which woman was created out of a man’s rib and was principally responsible for the banishment from paradise. The WEIRD world is struggling to disentangle itself from these myths and attitudes, and modern science is its best tool for doing so.
One of the most interesting findings, then, from modern neurology, is that while there are no categorical differences between the male and the female brain in humans, there are significant statistical differences – which might make for a difference in human society as a whole. To explain further: no categorical difference means that, if you were a professional neurologist who had been studying the human brain for decades, and were presented with a completely disembodied but still functional human brain to analyse, you wouldn’t be able to assert categorically that this brain belonged to a male or a female. That’s because the differences among female brains, and among male brains, are substantial – a good reason for promoting gender fluidity. However, statistically, there are also substantial differences between male and female brains, with males having more ‘grey’ material (the neurons) and females having more ‘white’ material (the myelinated connections between neurons), and with males having slightly higher brain volume, in accord with general sexual dimorphism. In a 2017 British study involving some 5,000 subjects, researchers found that:
Adjusting for age, on average… women tended to have significantly thicker cortices than men. Thicker cortices have been associated with higher scores on a variety of cognitive and general intelligence tests.
This sounds promising, but it’s doubtful that anything too insightful can be made of it, any more than a study of bonobo neurophysiology would provide us with insights into their culture. But, you never know…
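To make that ‘categorical versus statistical’ distinction a little more concrete, here’s a minimal sketch in Python – not drawn from the article or any of the studies mentioned, with a purely illustrative effect size – of why a genuine difference in group averages still doesn’t let anyone sort individual brains into ‘male’ and ‘female’ with any confidence:

```python
# Toy illustration (assumed numbers, not real data): two groups whose means differ
# modestly can overlap so much that individuals can't be classified reliably.
import numpy as np

rng = np.random.default_rng(42)

d = 0.5  # hypothetical standardised mean difference (Cohen's d) for some brain measure
group_a = rng.normal(loc=0.0, scale=1.0, size=100_000)  # e.g. one sex
group_b = rng.normal(loc=d, scale=1.0, size=100_000)    # e.g. the other sex

# The best single-threshold classifier puts the cut-off halfway between the means.
threshold = d / 2
accuracy = 0.5 * (np.mean(group_a < threshold) + np.mean(group_b >= threshold))

print(f"difference in group means: {group_b.mean() - group_a.mean():.2f} standard deviations")
print(f"best guess of group membership from this measure alone: {accuracy:.1%}")
# With d = 0.5 the best possible guess is right only about 60% of the time -
# a clear statistical difference, but nothing like a categorical divide.
```

With a standardised difference of 0.5 – larger than most of the behavioural differences discussed above – the best possible guess from that single measure is barely better than a coin toss, which is the point of the ‘no categorical difference’ claim.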
References
Frans De Waal & Frans Lanting, Bonobo: the forgotten ape, 1997.
https://www.humancondition.com/freedom-the-importance-of-nurturing-in-bonobo-society/
on the origin of the god called God, part 2: the first writings, the curse on women, the jealous god
reading matters 9
New Scientist – the collection: mysteries of the human brain. 2019
- content hints – history of neurology, Galen, Hippocrates, Descartes, Galvani, Thomas Willis, Emil Du Bois-Reymond, Santiago Ramon y Cajal, connectionism, plasticity, mind-maps, forebrain, midbrain, hindbrain, frontal, parietal and occipital lobes, basal ganglia, thalamus, hypothalamus, amygdala, hippocampus, cerebral cortex, substantia nigra, pons, cerebellum, medulla oblongata, connectome, action potentials, axons and dendritic spines, neurotransmitters, axon terminals, signalling, ion channels and receptors, deep brain stimulation, transcranial direct current stimulation, hyper-connected hubs, 170,000 kilometres of nerve fibres, trains of thought, unbidden thoughts, memory and imagination, the sleeping brain, unconscious activity, the role of dreams, brainwaves during sleep, sleep cycles, traumatic stress disorder, Parkinsons, ADHD, dementia, depression, epilepsy, anaesthesia, attention, working memory, first memories, rationality, consciousness, von Economo neurons, the sense of self…
the male and female brain, revisited
Culture does not make people. People make culture. If it is true that the full humanity of women is not our culture, then we can and must make it our culture.
Chimamanda Ngozi Adichie

An article, ‘Do women and men have different brains?’, from Mysteries of the human brain, in the New Scientist ‘Collection’ series, has persuaded me to return to this issue – or perhaps non-issue. It argues convincingly, to me at least, that it’s largely a non-issue, one created mainly by a problem of framing.
The above-mentioned article doesn’t go much into the neurology that I described in my piece written nearly seven years ago, but it raises points that I largely neglected. For example, in noting differences in the amygdalae, and between white and grey matter, I failed to sufficiently emphasise that these were averages. The differences among women on these and other measures are greater than the differences between women and men. Perhaps more importantly, we need to ask who the female and male subjects in these studies were. Were they randomly selected, and what would that even mean? What lives had they led? We know more now about the plasticity of the brain, and it’s likely that our neurological activity and wiring have much more to do with our focus – with what we’ve been taught or encouraged to focus on from our earliest years – than with our gender.
And this takes me back to framing. Studies designed to ‘seek out’ differences between male and female brains are in an important sense compromised from the start, as they tend to rule out the differences among men and among women due to a host of other variables. They also lead researchers to make too much of what might be quite minor statistical differences. To quote from the New Scientist article, written by Gina Rippon, author of the somewhat controversial book The gendered brain:
Revisiting the evidence suggests that women and men are more similar than they are different. In 2015, a review of more than 20,000 studies into behavioural differences, comprising data from over 12 million people, found that, overall, the differences between men and women on a wide range of characteristics such as impulsivity, cooperativeness and emotionality were vanishingly small.
What all the research seems more and more to be pointing to is that there’s no such thing as a male or a female brain, and that our brains are much more what we make of them than previously thought. Stereotyping, as the article points out, has led to ‘stereotype threat’ – the fact that we tend to conform to stereotypes if that’s what’s expected of us. And all this fuels my long-standing annoyance at the stereotyped advertising and sales directed at each gender, but especially girls and women, which, as some feminists have pointed out, has paradoxically become more crass and extreme since the advent of second-wave feminism.
And yet – there are ways of looking at ‘natural’ differences between males and females that might be enlightening. That is, are there informative neurological differences between male and female rats? Male and female wolves? Are there any such differences between male and female bonobos, and male and female chimps, that can inform us about why our two closest living relatives are so socially and behaviourally different from each other? These sorts of studies might help to isolate ‘real’, biological differences in the brains of male and female humans, as distinct from differences due to social and cultural stereotyping and reinforcement. Then again, biology is surely not destiny these days.
Not destiny, but not entirely to be discounted. In the same New Scientist collection there’s another article, ‘The real baby brain’, which looks at the condition popularly known as ‘mummy brain’ or ‘baby brain’ – a supposed mild cognitive impairment due to pregnancy. I know of at least one woman who’s sure this is real (I don’t know many people), but until recently it has been little more than an untested meme. There is, apparently, a slight, temporary shrinkage in a woman’s brain during pregnancy, but this hasn’t been found to correlate with any behavioural changes, and some think it has to do with streamlining. In fact, as one researcher, Craig Kinsley, explained, his skepticism about the claim was aroused by watching his partner handle the many new tasks of motherhood with great efficiency while still maintaining a working life. So Kinsley and his team looked at rat behaviour to see what they could find:
In his years of studying the neurobiology underlying social behaviours in rats, his animals had never shown any evidence of baby brain. Quite the opposite, actually. Although rats in the final phase of their pregnancy show a slight dip in spatial ability, after their pups are born they surpass non-mothers at remembering the location of food in complex mazes. Mother rats are also much faster at catching prey. In one study in Kinsley’s lab, the non-mothers took nearly 270 seconds on average to hunt down a cricket hidden in an enclosure, whereas the mothers took just over 50 seconds.
It’s true that human mothers don’t have to negotiate physical mazes or find tasty crickets (rat mothers, unlike humans, are solely responsible for raising offspring), but it’s also clear that they, like all mammalian mothers, have to be more alert than usual to signs of danger when they have someone very precious and fragile to nurture and attend to. In rats, this shows up in neurological and hormonal changes – lower levels of stress hormones in the blood, and less activity in brain regions such as the amygdalae, which regulate fear and anxiety. Other hormones, such as oestradiol and oxytocin, soar to several times their normal levels, priming rapid responses to sensory stimuli from offspring. Many more connections between neurons are forged in late pregnancy and its immediate aftermath.
Okay, but we’re not rats – nothing like. But how about monkeys? Owl monkeys, like most humans, share the responsibilities of child-rearing, yet research has found that mothers are still better than non-mothers at finding and gaining access to stores of food. Different behaviours will be reflected in different neural connections.
So, while it’s certainly worth exploring how the female brain functions during an experience unique to females, most of the time women and men engage in the same activities – working, playing, studying, socialising and so forth. Our brain processes will reflect the particular patterns of our lives, often determined at an early age, as the famous Dunedin longitudinal study has shown. Gender, and how gender is treated in the culture in which we’re embedded, is just one of many factors that will affect those processes.
References
New Scientist – The Collection, Mysteries of the human brain, 2019
https://en.wikipedia.org/wiki/Dunedin_Multidisciplinary_Health_and_Development_Study
why do our pupils dilate when we’re thinking hard?

Canto: So we’re reading Daniel Kahneman’s Thinking fast and slow, among other things, at the moment, and every page has stuff worth writing about and exploring further – it’s impossible to keep up.
Jacinta: Yes, with this stuff it’s a case of reading slow and slower. Or writing about it faster and faster – unlikely in our case. A lot of it might be common knowledge, but not to us. In these first fifty pages or so he’s getting into embodied cognition, which we’ve written about, and there’s new data here that I didn’t know about but which makes a lot of sense to me.
Canto: That’s because you’ve been primed to accept this stuff haha. But I want to focus here more narrowly on experiments Kahneman did early in his career with Jackson Beatty, who went on to become the leading figure in the study of ‘cognitive pupillometry’.
Jacinta: Presumably measuring pupils, which is easy enough, while measuring cognition or cognitive processes, no doubt a deal harder.
Canto: Kahneman tells the story of an article he read in Scientific American – a mag I regularly read in the eighties, so I felt all nostalgic reading this.
Jacinta: Why’d you stop reading it?
Canto: I don’t know – I had a hiatus, then I started reading New Scientist and Cosmos. I should get back to Scientific American. All three. Anyway, the article was by Eckhard Hess, whose wife had noticed that his pupils dilated when he looked at lovely nature pictures. He started looking into the matter, and found that people are judged to be more attractive when their pupils are wider, and that belladonna, long used in cosmetics for just that effect, also dilates the pupils. More importantly for Kahneman, Hess noted that ‘the pupils are sensitive indicators of mental effort’. Kahneman was looking for a research project at the time, so he recruited Beatty to help him with some experiments.
Jacinta: And the result was that our pupils dilate very reliably, and quite significantly, when we’re faced with tough problem-solving tasks, like multiplying double-digit numbers – and they constrict again on completion, so reliably that the monitoring researcher can surprise the subject by saying ‘so you’ve got the answer now?’
Canto: Yes, the subjects were arranged so the researchers could view their eyes magnified on a screen. And of course this kind of research is easy enough to replicate, and has been. My question, though, is why does the pupil dilate in response to such an internal process as concentration? Pupils widening to let more light in when the light is dim – that makes intuitive sense – but widening in order to seek a kind of metaphorical enlightenment? That’s fascinating.
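[A side note, not from Kahneman or Beatty: here’s a rough sketch of how a task-evoked pupillary response might be quantified – compare average pupil diameter during the task with a pre-task baseline. The sampling rate, diameters and noise levels below are invented purely for illustration.]

```python
# Toy sketch of task-evoked pupillary response (TEPR) analysis.
# All numbers are invented for illustration; real pupillometry pipelines
# also deal with blinks, filtering and luminance control.
import numpy as np

SAMPLE_RATE = 60  # samples per second (assumed eye-tracker rate)

def simulate_trial(seconds=8, task_onset=2.0, task_offset=6.0):
    """Fake pupil-diameter trace (mm): baseline, dilation during the task,
    then a return towards baseline after the answer is given."""
    t = np.arange(0, seconds, 1 / SAMPLE_RATE)
    diameter = np.full_like(t, 3.0)                    # ~3 mm resting diameter
    during_task = (t >= task_onset) & (t < task_offset)
    diameter[during_task] += 0.4                       # ~0.4 mm task-evoked dilation
    diameter += np.random.default_rng(0).normal(0, 0.05, t.size)  # measurement noise
    return t, diameter

def task_evoked_response(t, diameter, task_onset, task_offset):
    """Mean dilation during the task relative to the 1-second pre-task baseline."""
    baseline = diameter[(t >= task_onset - 1.0) & (t < task_onset)].mean()
    during = diameter[(t >= task_onset) & (t < task_offset)].mean()
    return during - baseline

t, d = simulate_trial()
print(f"Task-evoked dilation: {task_evoked_response(t, d, 2.0, 6.0):.2f} mm")
```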
Jacinta: Well I think you’re hitting on something there. Think of attention rather than concentration. I suspect that our pupils widen when we attend to something important or interesting. As Eckhard Hess’s wife noticed when he was looking at a beautiful scene. In the case of a mathematical or logical problem we’re attending to something intently as well, and the fact that it’s internal rather than external is not so essential. We’re looking at the problem, seeing the problem as we try to solve it.
Canto: Yes but again that’s a kind of metaphorical seeing, whereas your pupils don’t dilate metaphorically.
Jacinta: Yes, but it’s likely that our pupils dilate in the dark only when we’re trying to see in the dark – making that effort. When we turn off the bedroom light at night before going to sleep, our pupils probably don’t dilate, because we’re not trying to see the familiar objects around us; we just want to fall asleep. So even if we leave our eyes open for a brief period, they’re not actually trying to look at anything. It’s like when you enter a classroom and see a maths problem on the board. Your pupils won’t dilate just on noticing the problem, but only when you try to solve it.
Canto: I presume there’s been research on this – like with everything we ever think of. What I’ve found is that the ‘pupillary light reflex’ is described as part of the autonomic nervous system – a largely involuntary system, which responds ‘autonomously’, unconsciously, to the amount of light the eye receives. But as you say, there are probably overriding influences coming from the brain rather than from outside. However, a pupil ‘at rest’, in a darkened room, is usually much dilated. So dilation is by no means always to do with attention or focus.
Jacinta: Well, there’s a distinction made in neurology between bottom-up and top-down processing, which you’ve just alluded to. Information coming from outside, sensed by the skin, the eyes and other sensory organs, is sent ‘up’ to the brain – the Higher Authority – which then sends down responses, in this case to dilate or constrict the pupil; all of that is bottom-up processing. But researchers have found that the pupil isn’t regulated only in a bottom-up way.
Canto: And that’s where cognitive pupillometry comes in.
Jacinta: And here are some interesting research findings regarding top-down influences on pupil size. When subjects were primed with pictures relating to the sun, even dim ones, their pupils constricted more than when they were shown pictures of the moon, even when the moon pictures were actually brighter than the sun pictures. Even words connected to brightness made their pupils constrict. There’s also been solid research to back up the speculations of Eckhard Hess that emotional scenes, images and memories, whether positive or negative, have a dilating effect on our pupils. For example, hearing the cute sound of a baby laughing, or the disturbing sound of a baby screaming, widens our pupils, while more neutral sounds of road traffic or workplace hubbub have very little effect.
Canto: Because there’s nothing, or maybe too much info, to focus our attention, surely? While the foregrounded baby’s noises stimulate our sense of wonder, of ‘what’s happening?’ We’re moved to attend to it. Actually this reminds me of something apparently unrelated but maybe not. That’s the well-known problem that we’re moved to give to a charity when one suffering child is presented in an advertisement, and less and less as we’re faced with a greater and greater number of starving children. These numbers become like distant traffic, they disperse our attention and interest.
Jacinta: Yes well that’s a whole other story, but this brings us to the most interesting of findings re top-down effects on our pupils, and the question we’ve asked in the title. A more scientific name for thinking hard is increased cognitive load, and countless experiments have shown that increasing cognitive load, for example by solving tough maths problems, or committing stacks of info to memory, correlates with increased pupillary dilation. This hard thinking is done in the prefrontal cortex, but we won’t go into detail here about its more or less contested compartments. What I will say is there’s an obvious difference between thinking and memorising, and both of these activities increase cognitive load, and pupillary dilation. Some very interesting studies relating memorising and pupillary dilation have shown that children under a certain age, unsurprisingly, are less able to hold info in short-term memory than adults. The research task was to memorise a long sequence of numbers. Monitoring of pupil response showed that the children’s pupils would constrict from their dilated state after six numbers, unlike those of adults.
Canto: So, while we may not have a definitive answer to our title question – the why question – it seems that cognitive load, like any load we carry, requires the expenditure of energy, which may show up in the activity of the iris muscles that dilate the pupil. This dilation reveals, apparently, that we’re attending to or concentrating on something. I can see some real-world applications. Imagine, as a teacher, having a physics class, say. You could get your students to wear special glasses that monitor the dilation and constriction of their pupils – I’m sure such devices could be rigged up, and connected to a special console at the teacher’s desk, so you could see who in the class was paying close attention and who was off in dreamland…
Jacinta: Yeah right haha – even if that were physically possible, there are just a few privacy issues there. And how would you know whether the pupillary dilation was due to the fascinating complexities of electromagnetism or the delightful profile of your student’s object of fantasy a couple of seats away? How could you know whether their apparent concentration had anything much to do with comprehension? Or whether their apparent lack of concentration was down to boredom, or incomprehension, or the fact that they were way ahead of you in comprehension?
Canto: Details details. Small steps. One way of finding out all that is by asking them. At least such monitoring would give you some clues to go by. I look forward to this brave new transhumanising world….
References
Daniel Kahneman, Thinking fast and slow, 2012
Torres A and Hout M (2019), ‘Pupils: a window into the mind’, Frontiers for Young Minds, 7:3, doi: 10.3389/frym.2019.00003 – https://kids.frontiersin.org/article/10.3389/frym.2019.00003
On Massimo Pigliucci on scientism 2: brains r us

In his Point of Inquiry interview, Pigliucci mentions Sam Harris’s book The Moral Landscape a couple of times. Harris seeks to make the argument, in that book, that we can establish, sometime in the future, a science of morality. That is, we can be factual about the good life and its opposite, and we can be scientific about the pathways, though there might be many, that lead towards the good life and away from the bad life. I’m in broad agreement about this, though for pragmatic reasons I would probably prefer the term ‘objective’ to ‘scientific’. Just because it doesn’t frighten the horses so much. As mentioned in my previous post, I don’t want to get hung up on terminology. Science obviously requires objectivity, but it doesn’t seem clear to everyone that morality requires objectivity too. I think that it does (as did, I presume, the authors of the Universal Declaration of Human Rights), and I think Harris argues cogently that it does, based on our well-being as a social species. But Pigliucci says this about Harris’s project:
When Sam Harris wrote his famous book The Moral Landscape, the subtitle was ‘How science can solve moral questions’ – something like that. Well that’s a startling question if you think about it because – holy crap! So I would assume that a typical reader would buy that book and imagine that now he’s going to get answers to moral questions such as whether abortion is permissible and in what circumstances, or the death penalty or something… And get them from say physics or chemistry, maybe neuroscience, since Harris has a degree in neuroscience..
Pigliucci makes some strange assumptions about the ‘typical reader’ here. Maybe I’m a long way from being a ‘typical reader’ (don’t we all want to think that?), but to me the subtitle (which is actually ‘How science can determine human values’) suggests, again, methodology: by what methods, or by what means, can human values – that’s to say, what is most valuable to human well-being – be determined? I would certainly not have expected, reading the actual subtitle, and considering the main title of the book, answers to specific moral questions. And I certainly wouldn’t expect answers to those questions to come from physics or chemistry. Pigliucci just mentions those disciplines to make Harris’s views seem more outrageous. That’s not good-faith arguing. Neuroscience, however, is closer to the mark. Our brains r us, and if we want to know why a particular mammal behaves ‘badly’, or with puzzling altruism, studying the animal’s brain might be one among many places to start. And yet Pigliucci makes this statement later on about ‘scientistic’ scientists:
It seems to me that the fundamental springboard for all this is a combination of hubris, the conviction that what they do is the most important thing – in the case of Sam Harris for instance, it turns out at the end of the book [The Moral Landscape] it’s not just science that gives you the answers, it’s neuroscience that gives you the answers. Well, surprise surprise, he’s a neuroscientist.
This just seems silly to me. Morality is about our thoughts and actions, which start with brain processes. Our cultural practices affect our neural processes from our birth, and even before our conception, given the cultural attitudes and behaviours of our future parents. It’s very likely that Harris completed his PhD in cognitive neuroscience because of his interest in human behaviour and its ethical consequences (Harris is of course known for his critique of religion, but there seems no doubt that his greatest concerns about religious belief are at base concerns about ethics). Yet according to Pigliucci, had Harris been a physicist he would have written a book on morality in terms of electromagnetic waves or quantum electrodynamics. And of course Pigliucci doesn’t examine Harris’s reasoning as to why he thinks science, and most particularly neuroscience and related disciplines, can determine human values. He appears to simply dismiss the whole project as hubristic and wrong-headed.
I know that I’m being a little harsh in critiquing Pigliucci on the basis of a 20-minute interview, but there doesn’t seem to be any attempt, at least here, to explain why certain topics are or should be off-limits to science, except to imply that it’s obvious. Does he feel, for example, that religious belief should be off-limits to scientific analysis? If so, what do reflective non-religious people do with their puzzlement and wonder about such beliefs? And if it’s worth trying to get to the bottom of the cultural and psychological conditions that bring about the neurological networking that disposes people to believe in a loving or vengeful omnipotent creator-being, it’s also worth trying to get to the bottom of other mindsets that dispose people to behave in ways productive or counter-productive to their well-being. And the reason we’re interested isn’t just curiosity, for the point isn’t just to understand our human world, but to improve it.
Finally, Pigliucci seems to confuse scientism with a lack of interest in philosophy – especially the philosophy of science – among people in his orbit such as Neil deGrasse Tyson and Lawrence Krauss. They’re surely two different things. It isn’t ‘scientism’ for a scientist to eschew a particular branch of philosophy any more than it is for her to eschew a field of science different from her own, though it might sometimes seem a bit narrow-minded. Of course, as a non-scientist and self-professed dilettante I’m drawn to those with a wide range of scientific and other interests, but I certainly recognise the difficulty of getting your head around quantum-mechanical, legal, neurological, biochemical and other terminology (I don’t like the word ‘jargon’) when your own ‘rabbit hole’ is so fascinating and enjoyably time-consuming.
There are, of course, examples of scientists claiming too much for the explanatory power of their own disciplines, and that’s always something to watch for, but overall I think the ‘scientism’ claim is more abused than otherwise – ‘weaponised’ is the trendy term for it. And I think Pigliucci needs to be a little more skeptical of his own views about the limits of science.