an autodidact meets a dilettante…

‘Rise above yourself and grasp the world’ – attributed to Archimedes


dyslexia is not one thing 2: structural deficits


the human brain – a very, very rough guide

Jacinta: So we’re going to look at earlier ideas about dyslexia, before the recent revolution in neurology, if that’s not being too hyperbolic. These ideas tended to focus on known systems, before there were well-identified or detailed neural correlates. ‘Word-blindness’ was an early term for dyslexia, highlighting the visual system. This was partly based on the 19th-century case of a French businessman and musician who, after a stroke, could no longer read words or musical notes, or name colours. A second stroke worsened the situation considerably, eventually causing his death.

Canto: An autopsy revealed that the first stroke had damaged the left visual area and part of the corpus callosum, which connects the two hemispheres. It appears that what the man was seeing with his right hemisphere could not be ‘backed up’ by the left visual area, and/or connected to the left language area. The second stroke mainly affected the angular gyrus, a complex and vital integrating and processing region towards the back of the brain.

Jacinta: Yes, and before we go on, what we’re doing here is looking in more detail at the four potential sources of dyslexia set down at the end of the previous post. So in this post we’re focusing on 1. a developmental, possibly genetic, flaw in the structures underlying language or vision. 

Canto: Right, so there’ll be three more dyslexia posts after this. So this ‘Monsieur X’ case was one of ‘classic alexia’, or acquired dyslexia, and it marked an important step forward in mapping brain regions in relation to the visual and processing aspects of language. Norman Geschwind described it as a ‘disconnection syndrome’, in which two brain regions essential to a function, in this case written language, are cut off from each other.

Jacinta: The auditory cortex became an important focus in the twentieth century, as researchers noted a problem with forming ‘auditory images’ – which sounds like a problem everyone would have! More specifically, it means not being able to translate the visual images made by letters and words into their corresponding sounds.

Canto: Yes, so that a word like ‘come’ (which is actually quite complex: the ‘c’ pronounced as a hard ‘k’, followed by an ‘o’ which, when spoken, is neither the typical short nor the long version, followed finally by the silent ‘e’ which has a quite strange effect on the preceding vowel) would be quite a challenge. Perhaps the real surprise is that we have no trouble with it.

Jacinta: Yes, I prefer cum myself, but that’s a bit off-topic. Anyway, psycholinguistics, a field owing much to the work of Noam Chomsky, which came into prominence from the 1970s, tended to treat dyslexia as specifically language-based rather than audio-visual. Taking this perspective, researchers found that ‘reading depended more on the linguistically demanding skills of phonological analysis and awareness than on sensory-based auditory perception of speech sounds’ (Wolf, p173). This was evidenced by the way impaired-reading children treated ‘visual reversal’ in letters (e.g. p and q, b and d). They were able to draw the letters accurately, but had great trouble saying (sounding) them. This appears to be a spoken language problem, which carries over to writing.

Canto: Indeed, it highlighted a problem, which apparently had nothing to do with intelligence, or basic perception, but was more of a specific perception-within-language thing:

These children cannot readily delete a phoneme from the beginning or end of a word, much less from the middle, and then pronounce it; and their awareness of rhyme patterns (to decide whether two words like ‘fat’ and ‘rat’ rhyme or not) develops much more slowly. More significantly, we now know that these children experience the most difficulties learning to read when they are expected to induce the rules of correspondence between letters and sounds on their own.

Phonological explanations of dyslexia have resulted in a lot of effective remedial work in recent decades, and a library of research in the field of reading deficits.

Jacinta: Yes, these are called structural hypotheses, noting deficits in awareness of phonemic structure, and of phoneme-grapheme correspondences. And these deficits presumably have their home in specific neural regions and wiring. The executive processes of the frontal lobes may be at play, in terms of organised attention, the fixing of memory and the monitoring of comprehension, but also the more ‘basic’ processes of the cerebellum, involving timing and motor coordination. And coordination between these regions may also be an issue.

Canto: And, as Wolf points out, these structural hypotheses have sheeted home problems to so many brain regions – the frontal executive function region, the speech region close by, the central auditory region, the language and language/visual integration regions, the posterior visual cortex and the cerebellum – that it would be fair to say that ‘many of the collective hypothesised sources of dyslexia mirror the major component structures of the reading brain’ (Wolf, p176).

Jacinta: Which sounds pretty serious. Why is it happening? And why not for others…?

References

Maryanne Wolf, Proust and the squid: the story and science of the reading brain, 2007

https://www.kenhub.com/en/library/anatomy/angular-gyrus

 

Written by stewart henderson

April 16, 2023 at 4:50 pm

what is Bayesian inference?


Canto: So as a dumb non-scientific science aficionado, I’ve come across Bayesian inference and probability a few times before, and might even have come to an understanding of it before losing it again, but I want to get my head around it, especially in terms of consciousness and how we make sense of the external world via the complex interpreting and understanding systems in our heads. My vague sense of it is that it’s a kind of open-ended system of inferring what’s happening by continually updating the ‘understanding system’ with new data. Is that anything like it?

Jacinta: Okay, we’ve been reading Anil Seth’s Being You, subtitled ‘a new science of consciousness’, which argues for consciousness, or at least perception, as ‘controlled hallucination’. Bayesian reasoning is neatly described as ‘inference to the best explanation’, so yes, we take percepts that strike us as surprising or out of the ordinary, and do work on them through memory or the widening of perspective to make them fit with previous experience – the best explanation we can make of the meaning of that percept. I think by ‘controlled hallucination’, Seth is suggesting that the impressionistic blast of data that impinges on our senses at any moment gets its ‘control’, loses its hallucinatory impact, as a result of what we call experience, the connections between this blast and previous blasts.

Canto: So due to familiarity we stop thinking of them as blasts, though they might’ve seemed that way to us as new-borns. And might seem that way again under the influence of drugs.

Jacinta: Yes, which can scramble the regular controls. But returning to Thomas Bayes and his reasoning, Seth describes it as abductive, as opposed to the deductive reasoning of classical logic, or the inductive reasoning derived from experience (extrapolation from an apparently unending series of observations, such as the regular waxing and waning of the moon). Here’s what Seth says about abduction:

Abductive reasoning – the sort formalised by Bayesian inference – is all about finding the best explanation for a set of observations, when these observations are incomplete, uncertain or otherwise ambiguous. Like inductive reasoning, abductive reasoning can also get things wrong. In seeking the ‘best explanation’, abductive reasoning can be thought of as reasoning backward, from observed effects to their most likely causes, rather than forward, from causes to their effects – as is the case for deduction and induction.

Anil Seth, Being you, p98

Canto: Ah right, so what we experience first are effects – stuff in our heads, and we have to make the best guess about their causes – stuff in the world. Or what we believe to be in the world. So, as new-borns we see – in our heads – the faces and bodies of these people making a fuss over us, though we apparently don’t even know what faces and bodies are, let alone parents. But over time and much repetition we come to see these faces and bodies aren’t there to harm us (if we’re lucky) and, with further information over vast swathes of time, that they’re our parents, and that we’re one of the species called Homo sapiens, etc etc

Jacinta: Well it’s good that you’ve gone back to earliest childhood, because it makes a mockery, in a way, of inferring ‘the most likely cause for the observed data’, to quote Seth, as obviously infants don’t ‘think’ that way.

Canto: And neither do adults – it’s more automatic than ‘thinking’, it’s a way of understanding and surviving in their world…

Jacinta: We need to think of inference as something more basic, far more basic than an intellectual process, of course. Anyway, here’s how Seth describes it. We go from what we already know, which is termed the prior, to what we might know in the future (the posterior) by means of what we’re now learning (the likelihood). The uniting concept here is ‘knowledge’, in its different stages. The prior isn’t necessarily stable; it can be modified or overturned by new learning. You could describe the prior also as a belief. You may believe that, say, Ukraine will win the current war – whatever winning means in this context – but further learning may alter that belief one way or another. We’re looking for the best posterior probability, and so, in the Ukrainian example, we’re thoroughly examining future likelihoods – media sources and expert opinions as to the current state of events and what they might lead to – as well as battling with particular tendencies to be optimistic or pessimistic.

Canto: But doesn’t Bayesian inference, or probability, have a mathematical aspect? It doesn’t seem, from what you’ve said, that there’s anything remotely quantifiable here. How can you quantify beliefs or knowledge?

Jacinta: Well, Seth is looking at quantities here only in terms of some percept, say, as being more or less likely to be of a particular thing-in-the-world, say a particular species of bird, based on experience, the likelihood of that species being spotted in that place, at that time, and so on. I know that mathematics is involved in Bayesian probability – just look it up online – but the concept of inferring to the most likely conclusion from best current and past data seems to be mathematical only in that broadest sense. And I must admit I’m more interested in Seth’s concept of consciousness than in the mathematics of probability, Bayesian or otherwise.

Canto: Ah, but I’m wondering if, since all the physicists are telling me the universe is, if not mathematical, inexplicable without mathematics, maybe the full comprehension of consciousness requires maths too?

Jacinta: Okay, since our topic is Bayesian inference we might need to wade into the mathematical shallows here. So Thomas Bayes presented an alternative to what is now called frequentist statistical analysis (though that term came much later). Here’s a rough example taken from a video referenced below. A ‘frequentist’ GP would use basic statistics derived from a model – say, ‘a certain percentage of my male patients above a particular age have heart problems’ – to infer that the symptoms of the patient before her are quite likely the result of a heart condition. A Bayesian GP would have a similar model, but would also take into account her prior knowledge of this particular patient, which would make the diagnosis more or less likely depending on the content of that prior knowledge.
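Here’s a toy sketch of the contrast in Python, with entirely invented numbers – a made-up base rate, made-up symptom likelihoods and a made-up prior for this particular patient – just to show the shape of the calculation, not any real clinical reasoning:

# Toy comparison of a 'frequentist' and a Bayesian GP. All numbers invented.

def frequentist_estimate(base_rate):
    # The frequentist GP leans on the population model alone:
    # 'this fraction of comparable patients have heart problems'.
    return base_rate

def bayesian_estimate(prior, p_symptoms_if_heart, p_symptoms_if_not):
    # The Bayesian GP starts from a prior shaped by what she already knows
    # of this patient, then updates on the presenting symptoms.
    evidence = p_symptoms_if_heart * prior + p_symptoms_if_not * (1 - prior)
    return p_symptoms_if_heart * prior / evidence  # posterior P(heart | symptoms)

base_rate = 0.30             # invented: 30% of comparable patients have heart trouble
prior = 0.05                 # invented: this particular patient is fit, a non-smoker, etc.
p_symptoms_if_heart = 0.80   # invented: these symptoms usually accompany a heart condition
p_symptoms_if_not = 0.20     # invented: but they also turn up without one

print(frequentist_estimate(base_rate))                                   # 0.30
print(bayesian_estimate(prior, p_symptoms_if_heart, p_symptoms_if_not))  # about 0.17

The same machinery run with a different prior – a strong family history, say – would push the posterior well above the base rate instead.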

Canto: Yeah that’s the mathematical shallows all right.

Jacinta: Well, it might surprise you how mathematical even examples like this can be made. But put another way, the Bayesian approach is experiential rather than simple statistical number-crunching. The frequentist approach, as the name suggests, sticks to observed frequencies, so maybe it strives to be more purely objective.

Canto: Quantitative vs qualitative?

Jacinta: Well, yes, that’s part of it, but there is Bayes’ theorem itself, which I may as well stick in here for completeness’ sake.
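In its most commonly quoted form (using A for the hypothesis and B for the new evidence):

P(A|B) = P(B|A) × P(A) / P(B)

where P(A) is the prior probability of A, P(B|A) the likelihood of the evidence B given A, P(B) the overall probability of the evidence, and P(A|B) the posterior probability of A once B has been taken into account.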

There are different descriptions of the theorem – this one doesn’t give much indication of the importance of prior knowledge/experience. Anyway, returning to Seth and consciousness, these Bayesian inferences would be constantly updated in the case of infants, as you say, as new knowledge is being produced at a rapid clip: that this animal is a dog, say, and is mostly harmless but not always, and this item isn’t food though it’s nice to suck on, but that item tastes horrible – though they wouldn’t know what taste is…

Canto: Which really explains why all these neural connections are laid down so quickly in early childhood – they’re really essential for survival.

Jacinta: And, as Seth points out, the best scientific methods involve Bayesian inference – theories updated or discarded in the light of experimental evidence or new discoveries that don’t fit. But our thinking – that, when we’re infants, these people constantly around us are more significant for us than the people who pass by or occasionally visit – doesn’t have to rise to the level of theory. They’re just understandings, more or less accurate, and constantly updated – for example, we might learn that these adults or pets aren’t always on our side, as when we try to eat the dog, or whatever. Anyway, we could go into a little bit of detail about the probabilities, from zero to one, of priors, likelihoods and posteriors, and about probability distributions, of the Gaussian kind, which shift as more information comes in, but maybe we’ll come back to it in a future post. My head hurts already.
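Canto: Before your head gives out entirely, here’s one last toy sketch – in Python again, with invented numbers – of the kind of shifting you mean: a Gaussian prior nudged, observation by observation, towards a posterior:

# A Gaussian prior updated by noisy observations (the standard normal-normal update).
# All numbers are invented, purely to show the posterior shifting and narrowing.

def update_gaussian(prior_mean, prior_var, obs, obs_var):
    # Combine a Gaussian prior with one observation carrying Gaussian noise.
    posterior_var = 1 / (1 / prior_var + 1 / obs_var)
    posterior_mean = posterior_var * (prior_mean / prior_var + obs / obs_var)
    return posterior_mean, posterior_var

mean, var = 0.0, 4.0           # a vague prior belief about some quantity
for obs in [2.1, 1.8, 2.3]:    # a few noisy observations clustered around 2
    mean, var = update_gaussian(mean, var, obs, obs_var=1.0)
    print(round(mean, 2), round(var, 2))
# the mean drifts towards 2 and the variance shrinks with each new observation

Each new observation pulls the distribution towards the data and sharpens it – which is roughly the picture Seth has in mind for perception.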

References

Anil Seth, Being you: a new science of consciousness, 2021

Bayesian vs frequentist statistics (video), Ox Educ

Frequentism and Bayesianism: What’s the Big Deal? | SciPy 2014 | Jake VanderPlas (video)

Written by stewart henderson

April 5, 2022 at 4:01 pm

inference in the development of reason, and a look at intuition


various more or less feeble attempts to capture intuition 

Many years ago I spent quite a bit of time getting my head around formal logic, filling scads of paper with symbols whose meanings I’ve long since forgotten, obviously through disuse.
I recognise that logic has its uses, tied to mathematics – in developing algorithms in the field of information technology, for example – but I can’t honestly see its use in everyday life, at least not in my own. Yet logic is generally valued as the sine qua non of proper reasoning, as far as I can see.
Again, though, in the ever-expanding and increasingly effective field of cognitive psychology, reason and reasoning as concepts are undergoing massive and valuable re-evaluation. As Hugo Mercier and Dan Sperber argue in The enigma of reason, they have benefitted (always arguably) from being taken out of the hands of logicians and (most) philosophers and examined from an evolutionary and psychological perspective. Charles Darwin read Hume on inference and reasoning and commented in his diary that scientists should consider reason as gradually developed, that is to say as an evolved trait. So reasoning capacities should be found in other complex social mammals, to varying degrees.

An argument has been put forward that intuition is a process that fits between inference and reason, or that it represents a kind of middle ground between unconscious inference and conscious reasoning. Daniel Kahneman, for example, has postulated three cognitive systems – perception, intuition (system 1 cognition) and reasoning (system 2). Intuition, according to this hypothesis, is the ‘fast’, experience-based, rule-of-thumb type of thinking that often gets us into trouble, requiring the slower ‘think again’ evaluation (which is also far from perfect) to come to the rescue. However, Mercier and Sperber argue that intuition is a vague term, defined more by what it lacks than by any positive characteristics of its own. It appears to be a slightly more conscious process of acting or thinking by means of a set of inferences.

To use a personal example, I’ve done a lot of cooking over the years, and might reasonably describe myself as an intuitive cook – I know from experience how much of this or that spice to add, how to reduce a sauce, how to create something palatable with limited ingredients and so forth. But this isn’t the product of some kind of intuitive mechanism; rather, it’s the product of a set of inferences drawn from trial-and-error experience that is more or less reliable. Mercier and Sperber describe this sense of intuitiveness as a kind of metacognition, or ‘cognition about cognition’, in which we ‘intuit’ that doing this, or thinking that, is ‘about right’, as when we feel or intuit that someone is in a bad mood, or that we left our keys in room x rather than room y. This feeling lies somewhere between consciousness and unconsciousness, and each intuition might vary considerably on that spectrum, and in terms of strength and weakness. Such intuitions are certainly different from perceptions, in that they are feelings we have about something. That is, they belong to us. Perceptions, on the other hand, are largely imposed on us by the world and by our evolved receptivity to its stimuli.

All of this is intended to take us, or maybe just me, on the path towards a greater understanding of conscious reasoning. There’s a long way to go…

References

Hugo Mercier & Dan Sperber, The enigma of reason: a new theory of human understanding, 2017

Daniel Kahneman, Thinking, fast and slow, 2011

Written by stewart henderson

December 4, 2019 at 10:45 pm