
dyslexia is not one thing 4: the left and the right


[image: a one-sided view (the left) of the parts of the brain involved in language and reading processing]

Canto: So we’re still looking at automaticity, and it’s long been observed that dyslexic kids have trouble retrieving names of both letters and objects from age three, and then with time the problem with letters becomes more prominent. This means that there just might be a way of diagnosing dyslexia from early problems with object naming, which of course starts first.

Jacinta: And Wolf is saying that it may not be just slowness but the use of different neural pathways, which fMRI could reveal.

Canto: Well, Wolf suggests possibly the use of right-hemisphere circuitry. Anyway, here’s what she says re the future of this research:

It is my hope that future researchers will be able to image object naming before children ever learn to read, so that we can study whether the use of a particular set of structures in a circuit might be a cause or a consequence of not being able to adapt to the new task of literacy (Wolf, p181). 

So that takes us to the next section: “An impediment in the circuit connections among the structures”.

Jacinta: Connections between. And if we’re talking about the two hemispheres, the corpus callosum could’ve provided a barrier, as it does with stroke victims…

Canto: Yes, connections within the overall reading circuit, which involves different parts of the brain, can be more important for reaching automaticity than the brain regions themselves, and a lot of neuroscientists are exploring this connectivity. According to Wolf, researchers are focusing on three forms of disconnection. One is an apparent disconnection ‘between frontal and posterior language regions, based on underactivity in an expansive connecting area called the insula. This important region mediates between relatively distant brain regions and is critical for automatic processing’ (Wolf, p182). Another involves the occipital-temporal region, also known as Brodmann area 37, which is activated by reading in all languages. Normally, strong, automatic connections are created between this posterior region and frontal regions in the left hemisphere, but dyslexic readers instead connect the left occipital-temporal area with right-hemisphere frontal areas. It also seems to be the case that in dyslexics the left angular gyrus, accessed by good beginning readers, doesn’t connect effectively with other left-hemisphere language regions during reading and the processing of phonemes.

Jacinta: And it’s not just fMRI that’s used for neuro-imaging. There’s something called magnetoencephalography (a great word for dyslexics) – or MEG – which gives an ‘approximate’ account of the regions activated during reading. Using this tool, a US research group found that children with dyslexia were using completely different reading circuitry, which helps explain the underactivity in other regions observed by other researchers.

Canto: And leads to provocative suggestions of a differently arranged brain in some people. Which takes us to the last of the four principles: ‘a different circuit for reading’. In this section, Wolf begins by recounting the ideas of the neurologist Samuel T Orton and the educator Anna Gillingham in the 1920s and 1930s. Orton rejected the term ‘dyslexia’, preferring ‘strephosymbolia’ – essentially, ‘twisted symbols’ – though somehow it didn’t catch on. He hypothesised that in non-dyslexic readers, left-hemisphere processes identify the correct orientation of letters and letter sequences, but that in dyslexic readers this identification is somehow hampered by a problem with left-right brain communication. Decades later, in the 1970s, this hypothesis appeared to be validated: tests in which children were given ‘dichotic’ tasks – identifying different auditory signals presented to each ear – revealed that impaired readers didn’t use left-hemisphere auditory processes in the same way as average readers. Other research found that dyslexic readers showed ‘right-hemisphere superiority’, by which I think is meant that they favoured the right hemisphere for tasks usually favoured by the left.

Jacinta: Yes, weakness in the left hemisphere for handling linguistic tasks. But a lot of this was dismissed, or questioned, for being overly simplistic. You know, the old left-brain right-brain dichotomy that was in vogue in popular psychology some 30 years ago. Here’s what Wolf, very much a leading expert in this field, has to say on the latest findings (well, circa 2010):

In ongoing studies of the neural [development] of typical reading, the research group at Georgetown University [a private research university in Washington DC] found that over time there is ‘progressive disengagement’ of the right hemisphere’s larger visual recognition system in reading words, and an increasing engagement of left hemisphere’s frontal, temporal, and occipital-temporal regions. This supports Orton’s belief that during development the left hemisphere takes over the processing of words (Wolf, p185).

Canto: Yes, that’s ‘typical reading’.  Children with dyslexia ‘used more frontal regions, and also showed much less activity in left posterior regions, particularly in the developmentally important left-hemisphere angular gyrus’. Basically, they used ‘auxiliary’ right-hemisphere regions to compensate for these apparently insufficiently functional left regions. It seems that they are using ‘memory’ strategies (from right-hemisphere structures) rather than analytic ones, and this causes highly predictable delays in processing. 

Jacinta: A number of brain regions are named in this explanation/exploration of the problems/solutions for dyslexic learners, and these names mean very little to us, so let’s provide some – very basic – descriptions of their known functions, and their positions in the brain. 

Canto: Right (or left):

The angular gyrus – which, like all the other regions here, is worth looking up on Google Images for placement – is, like most cortical structures, a paired region, one in each hemisphere (the two hemispheres being joined by the corpus callosum). Described as ‘horseshoe-shaped’, it’s in the parietal lobe, or more specifically ‘the posterior region of the inferior parietal lobe’. The parietal lobes are paired regions (left and right) at the top and back of the brain, each divided into a superior and an inferior portion, the superior sitting atop the inferior. The angular gyrus is an essential region for reading and writing, so it comes first.

The occipital-temporal zone presumably refers to the area where the occipital and temporal lobes meet. The occipital is the smallest of the four lobes (occipital, temporal, parietal, frontal), each of which is ‘sided’, left and right. The junction of these two lobes with the parietal lobe (the TPO junction) is heavily involved in language processing, as well as many other higher-order functions.

Jacinta: Okay, that’ll do. It’s those delays you mention, the inability to attain automaticity, that characterise the dyslexic reader, and they appear to be caused by the use of different brain circuitry – circuitry of the right hemisphere. Best to quote Wolf again:

The dyslexic brain consistently employs more right-hemisphere structures than left-hemisphere structures, beginning with visual association areas and the occipital-temporal zone, extending through the right angular gyrus, supramarginal gyrus, and temporal regions. There is bilateral use of pivotal frontal regions, but this frontal activation is delayed (Wolf, p186).

Canto: The supramarginal gyrus is located just in front of and connected to the angular gyrus (a gyrus is anatomically defined as ‘a ridge or fold between two clefts on the cerebral surface in the brain’). These two gyri, as mentioned above, make up the inferior parietal lobe.

Jacinta: Wolf describes cumulative research from many parts of the world which tends towards a distinctive pattern in dyslexia, but she also urges skepticism – the human brain’s complexity is almost too much for a mere human brain to comprehend. No two brains are precisely alike, and there’s unlikely to be a one-size-fits-all cause or treatment, but explorations of this deficit are of course leading to a more detailed understanding of the brain’s processes involving particular types of object recognition, in visual and auditory terms.

Canto: It’s certainly a tantalising field, and we’ve barely scratched the surface – we’ve certainly not covered much, if any, of the latest research. One of the obvious questions is why some brains resort to different pathways from the majority, and whether there are upsides to offset the downsides. Is there some clue in the achievements of people known or suspected to have been dyslexic in the past? I feel rather jealous of those researchers who are trying to solve these riddles…

References

Maryanne Wolf, Proust and the squid: the story and science of the reading brain, 2010

https://www.kenhub.com/en/library/anatomy/angular-gyrus

https://academic.oup.com/brain/article/126/9/2093/367492

https://en.wikipedia.org/wiki/Supramarginal_gyrus

 

Written by stewart henderson

April 25, 2023 at 8:13 pm

good vibes: a conversation about voiced and unvoiced consonants and other speech noises



‘When I use a word,’ Humpty Dumpty said in rather a scornful tone, ‘it means just what I choose it to mean – neither more nor less.’

‘The question is,’ said Alice, ‘whether you can make words mean so many different things.’

‘The question is,’ said Humpty Dumpty, ‘which is to be master – that’s all.’
– Lewis Carroll, Through the Looking Glass

Canto: Okay, so now we’re getting into phonetics, is it? I’ve heard recently that some consonants are voiced, some unvoiced. Can you tell me what that means?

Jacinta: I think phonemics is the word. Or maybe phonology. Or maybe it is phonetics. Anyway don’t worry about the terminology, let’s look at your question. If I tell you that these five consonants are unvoiced: t, s, f, p, k, and that these five consonants are voiced: d, z, v, b, g, play around with those consonants in your mouth, that magnificent musical instrument, and see if you can work out the difference.

Canto: Okay, wow, I’ve noticed something. When I put my hand in front of my mouth and utter the first five, the unvoiced, I feel a blast of air hitting my hand. It doesn’t happen with the voiced consonants, or not nearly so much. Well, actually, no, ‘s’ isn’t like that, but the other four are. So that’s not it, though it’s an interesting thing to observe. But thinking voiced and unvoiced, that gets me somewhere. The voiced consonants all seem to be louder. Compare ‘z’ to ‘s’ for example. I seem to be forcing a sound out of my mouth, a kind of vibration, whereas ‘s’ is just a ‘ssss’. A vibration too, of course, but softer. Unvoiced, I get that. ‘t’ seems to be a mere touching of mouth parts and pushing air past them to make this very soft sound, whereas ‘g’, ‘d’ and ‘b’ are more forceful, louder. And ‘v’, like ‘z’, makes a loud vibration. It’s funny, though – even as I make the sounds, and focus on how they’re made in my mouth, I’m damned if I can work out clearly the mechanics of those sounds. But of course researchers have got them thoroughly sorted out, right?

Jacinta: Well you’ve got the distinction between voiced and unvoiced pretty right. The key is that in a voiced consonant the vocal cords vibrate (strictly speaking they’re vocal folds – ‘cords’ was an early description that has stuck, often with the musical embellishment ‘chords’). Here’s a trick: take the pair of consonants you mentioned, ‘s’ and ‘z’, and sound them out while putting your hand to your throat, where the voice-box is…

Canto: But it’s not really a box?

Jacinta: The larynx, responsible for sound production among other things. A housing for the vocal folds. So what do you feel?

Canto: Yes I feel a strong vibration with ‘z’, and nothing, or the faintest shadow of a vibration with ‘s’.

Jacinta: So now try ‘f’ and ‘v’. Then t/d, p/b and k/g.

Canto: Got it, and never to be forgotten. So that’s all we need to know about voiced and unvoiced consonants?

Jacinta: It’s something that could be done with learners – without overdoing it. I’d only point it out to learners who are having trouble with those consonants. And it’s intrinsically interesting, of course.
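A quick aside, not part of the original conversation: the pairing described above can be laid out as data. Below is a minimal Python sketch – purely illustrative, covering only the ten consonants mentioned in this conversation – that maps each voiceless consonant to its voiced partner. The articulation notes in the comments are standard descriptions, not anything quoted from Wolf or WALS.

# Voiceless/voiced pairs from the conversation above.
# Each pair is made with the same mouth position; the only difference
# is whether the vocal folds vibrate (the hand-on-throat test).
VOICED_COUNTERPART = {
    "t": "d",  # tongue tip on the alveolar ridge (stops)
    "s": "z",  # narrow hissing constriction (fricatives)
    "f": "v",  # lower lip against upper teeth (fricatives)
    "p": "b",  # both lips (stops)
    "k": "g",  # back of the tongue against the soft palate (stops)
}

def voiced_of(consonant: str) -> str:
    """Return the voiced partner of a voiceless consonant, if we know it."""
    return VOICED_COUNTERPART.get(consonant, consonant)

if __name__ == "__main__":
    for voiceless, voiced in VOICED_COUNTERPART.items():
        print(f"voiceless '{voiceless}'  ->  voiced '{voiced}'")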

Canto: So this raises questions about speech generally, and that great musical instrument you mentioned. Is the regular patterning of sound by our lips, our tongues and so on to make speech very different for different languages, and is this a barrier for some people from different language backgrounds to learning English?

Jacinta: Well, you know that there are different ways of speaking English – what we call accents and dialects. For L2 learners, especially if they take up their L2 – in this case English – later rather than sooner, it’s unlikely that they’ll lose their L1 accent, but this is unlikely to affect comprehension if they can get the syntax right.

Canto: I’ve noticed that Vietnamese speakers in particular have trouble producing some English word endings. What’s that about?

Jacinta: The Vietnamese language, like a lot of Asian languages, doesn’t use many of the consonant endings that English does. So that’s why Vietnamese speakers tend to ‘miss’ plurals in speech (s, z), as well as saying ‘I lie’ for ‘I like’, and the like. They have the same problem with t, v, j and other final consonants. They also have trouble with consonant combinations in the middle of words. And according to the ESLAN website, they ‘struggle greatly with the concept of combining purely alveolar sounds with post palatal ones’.

Canto: Eh?

Jacinta: Okay, let’s learn this together. An alveolar consonant is one that employs the tongue against or close to the upper alveolar ridge. That’s the ridge on the roof of the mouth just behind the upper teeth, and it’s called alveolar because this is where the sockets of the teeth – the alveoli – are. You can feel a ridge there. English generally uses the tongue tip to produce apical consonants, while French and Spanish, for example, use the flat or blade of the tongue to produce laminal consonants.

Canto: So can you give me an example of an apical alveolar consonant?

Jacinta: Yes, the letter n is an alveolar nasal consonant. Try it, and note that the tongue tip rests on the alveolar ridge and sound is produced largely through the nasal cavity. The letter t is a voiceless alveolar stop consonant. It’s called a stop because it stops the airflow in the oral cavity, and it’s voiceless as there’s no vibration of the vocal folds. On the other hand, the letter d is a voiced alveolar stop, differing from t in that it involves a vibration of the vocal folds, a ‘voicing’.

Canto: Mmm, but I notice that with d the tongue is a little less forward in where it hits the upper palate – behind the alveolar ridge – whereas with t you’re almost at the base of the upper teeth.

Jacinta: Well, there are four specific variants of d. Your specific variant is postalveolar, whereas the other three are more forward – dental, denti-alveolar and alveolar.
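Another aside: to keep the terminology straight, here’s a small illustrative Python sketch recording the features just discussed for n, t and d – place of articulation, manner, voicing, and the four variants of d that Jacinta mentions. It simply restates the conversation in data form; it isn’t a full phonetic inventory.

from dataclasses import dataclass

@dataclass
class Consonant:
    letter: str
    place: str            # where in the mouth the closure or constriction is made
    manner: str           # how the airflow is shaped: stop, nasal, fricative...
    voiced: bool          # whether the vocal folds vibrate
    variants: tuple = ()  # finer-grained places of articulation, where relevant

# The three alveolar consonants discussed above.
ALVEOLARS = [
    Consonant("n", place="alveolar", manner="nasal", voiced=True),
    Consonant("t", place="alveolar", manner="stop", voiced=False),
    Consonant("d", place="alveolar", manner="stop", voiced=True,
              variants=("dental", "denti-alveolar", "alveolar", "postalveolar")),
]

for c in ALVEOLARS:
    voicing = "voiced" if c.voiced else "voiceless"
    print(f"{c.letter}: {voicing} {c.place} {c.manner}",
          f"(variants: {', '.join(c.variants)})" if c.variants else "")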

Canto: So there’s this complex combination of stops – stopping the airflow – voiced and unvoiced, where the vocal folds come into play (or not), nasalisation and other soundings, all of it pretty well unconscious, and delivered with various levels of stress (in both senses of the term). It’s all pretty amazing, and it’s no wonder that those interested in AI and robotics have realised that embodied consciousness is where it’s at, because we’re surely a long long way from developing a robot that can manage anything equivalent to human speech. And that’s just in terms of phonology, never mind syntax and morphology. But I’ve got a few other ‘sound’ terms knocking around in my head that I’d like explained. Tell me, what are fricatives and plosives?

Jacinta: Okay, well this is all about consonants. The letters p, t, k (unvoiced) and b, d, g (voiced) are all plosives, in that ‘air flow from the lungs is interrupted by a complete closure being made in the mouth’. With fricatives – unvoiced f and s, voiced v and z – ‘the air passes through a narrow constriction that causes the air to flow turbulently and thus create a noisy sound’. I’m quoting from the World Atlas of Language Structures (WALS). So, for example, the difference between rice and rise is that the former ends in an unvoiced fricative and the latter in a voiced one – very peculiar, because rise uses ‘s’, which sounds like ‘z’, and rice uses ‘c’, which can lead learners astray with the ‘k’ sound. If you’re interested in learning more…

Canto: We both are.

Jacinta: WALS online is a great database with 151 chapters describing the structural features of the world’s languages – phonological, grammatical and lexical. It’s published by the Max Planck Institute for Evolutionary Anthropology and should be a great starting place for an all-round knowledge of human language.

Canto: Just another of those must-reads…
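One last illustrative sketch before the references, pulling together the voicing and manner distinctions above – again Python, again limited to the consonants named in this conversation. The plosive/fricative descriptions in the comments paraphrase the WALS definitions quoted earlier; the little feature table itself is an assumption made for the example, not WALS data.

# Manner and voicing for the consonants discussed above.
# Plosives: airflow from the lungs is interrupted by a complete closure in the mouth.
# Fricatives: air is forced through a narrow constriction, making a turbulent, noisy sound.
FEATURES = {
    "p": ("plosive", False), "t": ("plosive", False), "k": ("plosive", False),
    "b": ("plosive", True),  "d": ("plosive", True),  "g": ("plosive", True),
    "f": ("fricative", False), "s": ("fricative", False),
    "v": ("fricative", True),  "z": ("fricative", True),
}

def describe(sound: str) -> str:
    """Describe a consonant sound using the small feature table above."""
    manner, voiced = FEATURES[sound]
    return f"{'voiced' if voiced else 'voiceless'} {manner}"

# 'rice' ends in the sound /s/ and 'rise' in /z/: same manner, different voicing.
print("rice ends in a", describe("s"))   # voiceless fricative
print("rise ends in a", describe("z"))   # voiced fricative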

References:

http://wals.info

https://en.wikipedia.org/wiki/Voiceless_dental_and_alveolar_stops

https://en.wikipedia.org/wiki/Voiced_dental_and_alveolar_stops

https://www.englishclub.com/pronunciation/phonemic-chart.htm

http://englishspeaklikenative.com/resources/common-pronunciation-problems/vietnamese-pronunciation-problems/#error1

Written by stewart henderson

February 12, 2017 at 7:09 pm