a bonobo humanity?

‘Rise above yourself and grasp the world’ – attributed to Archimedes

Posts Tagged ‘reasoning’

interactional reasoning: modularity


all explained

Mercier and Sperber write a lot about modules and modularity in their book on interactional reasoning and its evolution. I’ve passed this over as I find the concepts difficult and I’m not sure if understanding reasoning as a module, if it fits that description, is essential to the thesis about interactional reasoning and its superiority to the intellectualist model. However, as an autodidact who hates admitting intellectual defeat, I want to use this blog to fully understand stuff for its own sake – and I generally find the reward is worth the pain.

Modules and modularity are introduced in chapter 4 of The enigma of reason. The idea is that there’s a kind of inferential mechanism that we share with other species – something noted, more or less, by David Hume centuries ago. A sort of learning instinct, as argued by bird expert Peter Marler, but taken further in our species, as suggested by Steven Pinker in The language instinct, and by other cognitive psychologists.

This requires us to think more carefully about the term ‘instinct’. Marler saw it as ‘an evolved disposition to acquire a given type of knowledge’, such as songs for birds and language for humans. We’ve found that we have evolved predispositions to recognise faces, for example, and that there’s a small area in the inferior temporal lobes called the fusiform face area that plays a vital role in face recognition. 

However, reasoning is surely more conceptual than perceptual. Interestingly, though, in learning how to do things ‘the right way’, that’s to say, normative behaviour, children often rely on perceptual cues from adults. When shown the ‘right way’ to do something by a person they trust, in a teacherly sort of way (this is called ostensive demonstration), an infant will tend to do it that way all the time, even though there may be many other perfectly acceptable ways to perform that act. They then try to get others to conform to this ostensively demonstrated mode of action. This suggests, perhaps, an evolved disposition for norm identification and acquisition.

Face recognition, norm acquisition and other even more complex activities, such as reading, are gradually being hooked up to specific areas of the brain by researchers. They’re described as being on an instinct-expertise continuum, and according to Mercier and Sperber:

[they] are what in biology might typically be called modules: they are autonomous mechanisms with a history, a function, and procedures appropriate to this function. They should be viewed as components of larger systems to which they each make a distinct contribution. Conversely, the capacities of a modular system cannot be well explained without identifying its modular components and the way they work together.

A close reading of this passage should suggest to us that reasoning is one of those larger systems informed by many other mechanisms. The mind, according to the authors, is an articulated system of modules. The neuron is a module, as is the brain. The authors suggest that this is, at the very least, the most useful working hypothesis. Cognitive modules, in particular, need not be innate, but can harness biologically evolved modules for other purposes.

I’m not sure how much that clarifies, though it has helped me, for what it’s worth. And that’s all I’ll be posting on interactional reasoning, for now.

Written by stewart henderson

February 6, 2020 at 5:29 pm

interactional reasoning: some stray thoughts


wateva

As I mentioned in my first post on this topic, bumble-bees have a fast-and-frugal way of obtaining the necessary from flowers while avoiding predators, such as spiders, which is essentially about ‘assessing’ the relative cost of a false negative (sensing there’s no spider when there is) and a false positive (sensing there’s a spider when there’s not). Clearly, the cost of a false negative is likely death, but a false positive also has a cost, in time and energy wasted in the search for safe flowers. It’s better to be safe than sorry, up to a point – the bees still have a job to do, which is their raison d’être. So they’ve evolved to be wary of certain rough-and-ready signs of a spider’s presence. It’s not a foolproof system, but it weights the scales towards false positives just enough to ensure overall survival, at least against this particular threat.
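The bees’ trade-off can be sketched as a toy expected-cost calculation – all the numbers below are invented for illustration, not taken from any actual bee research:

```python
# Toy signal-detection sketch of the bee's dilemma (all numbers invented).
# A 'jumpy' detector makes many false positives (skipped safe flowers);
# a 'calm' one makes more false negatives (landing where a spider lurks).

def expected_cost(p_spider, cost_false_neg, cost_false_pos,
                  p_false_neg, p_false_pos):
    """Expected cost per flower visit for a given error profile."""
    return (p_spider * p_false_neg * cost_false_neg
            + (1 - p_spider) * p_false_pos * cost_false_pos)

# A false negative (death) is vastly costlier than a false positive
# (a little wasted foraging time), so over-detecting spiders pays.
jumpy = expected_cost(p_spider=0.02, cost_false_neg=1000, cost_false_pos=1,
                      p_false_neg=0.05, p_false_pos=0.30)
calm = expected_cost(p_spider=0.02, cost_false_neg=1000, cost_false_pos=1,
                     p_false_neg=0.30, p_false_pos=0.05)

print(jumpy, calm)  # the jumpy strategy has the lower expected cost
```

With these made-up figures the jumpy bee’s expected cost per visit is far lower than the calm bee’s, which is all ‘better safe than sorry, up to a point’ amounts to.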

When I’m walking on the street and note that a smoker is approaching, I have an immediate impulse, more or less conscious, to give her a wide berth, and even cross the road if possible. I suffer from bronchiectasis, an airways condition, which is much exacerbated by smoke, dust and other particulates. So it’s an eminently reasonable decision, or impulse (or something between the two). I must admit, though, that this event is generally accompanied by feelings of annoyance and disgust, and thoughts such as ‘smokers are such losers’ – in spite of the fact that, in the long long ago, I was a smoker myself.

Such negative thoughts, though, are self-preservative in much the same way as my avoidance measures. However, they’re not particularly ‘rational’ from the perspective of the intellectualist view of reason. I would do better, of course, in an interactive setting, because I’ve learned – through interactions of a sort (such as my recent reading of Siddhartha Mukherjee’s brilliant cancer book, which in turn sent me to the website of the US Surgeon-General’s report on smoking, and through other readings on the nature of addiction) – to have a much more nuanced and informed view. Still, my ‘smokers are losers’ disgust and disdain is perfectly adequate for my own everyday purposes!

The point is, of course, that reason evolved first and foremost to promote our survival, but further evolved, in our highly social species, to enable us to impress and influence others. And others have developed their own sophisticated reasons to impress and influence us. It follows that the best and most fruitful reasoning comes via interactions – collaborative or argumentative, in the best sense – with our peers. Of course, as I’ve stated it here, this is a hypothesis, and it’s quite hard to prove definitively. We’re all familiar with the apparently solitary geniuses – the Newtons, Darwins and Einsteins – who’ve transformed our understanding, and those who’ve been exposed to formal logic will be impressed with the rigour of Aristotelian and post-Aristotelian logic, and the concepts of validity and soundness as the sine qua non of good reasoning (not to mention those fearfully absolute terms, rational and irrational). Yet these supposedly solitary geniuses often admitted themselves that they ‘stood on the shoulders of giants’, Einstein often mentioned his indebtedness to other thinkers, and Darwin’s correspondence was voluminous. Science is more than ever today a collaborative or competitively interactive process. Think also of the mathematician Paul Erdős, whose obsessive interest in this most rational of activities led to a record number of collaborations.

These are mostly my own off-the-cuff thoughts. I’ll return to Mercier and Sperber’s writings on the evolution of reasoning and its modular nature next time.

Written by stewart henderson

February 1, 2020 at 11:11 am

interactional reasoning and confirmation bias – introductory


I first learned about confirmation bias, and motivated reasoning, through my involvement with skeptical movements and through the Skeptics’ Guide to the Universe (SGU) podcast. As has been pointed out by the SGU and elsewhere, confirmation bias – the strong tendency to acknowledge and support views, on any topic, that confirm our own, and to dismiss or avoid listening to views from the opposite side – is a feature of liberal and conservative thought in equal measure, and as much a feature of the thinking of highly credentialed public intellectuals as of your average unlearned sot. The problem of confirmation bias, this ‘problem in our heads’, has been blamed for the current social media maladies we supposedly suffer from, creating increasingly partisan echo-chambers in which we allow ourselves, or are ‘driven by clicks’, to be shut off from opposing views and arguments.

But is confirmation bias quite the bogey it’s generally claimed to be? Is it possibly an evolved feature of our reasoning? This raises fundamental questions about the very nature of what we call reason, and how and why it evolved in the first place. Obviously I’m not going to be able to deal with this Big Issue in the space of the short blog pieces I’ve been writing recently, so it’ll be covered by a number of posts. And, just as obviously, my questioning of confirmation bias hasn’t sprung from my own somewhat limited genius – it pains me to admit – but from some current reading material.

The enigma of reason: a new theory of human understanding, by research psychologists Hugo Mercier and Dan Sperber, is a roolly important and timely piece of work, IMHO. So important that I launch into any attempt to summarise it with much trepidation. Anyway, their argument is that reasoning is largely an interactive tool, and evolved as such. They contrast the interactive view of reason with the ‘intellectualist’ view, which begins with Aristotle and his monumentally influential work on logic and logical fallacies. So with that in mind, they tackle the issue of confirmation bias in chapter 11 of their book, entitled ‘Why is reason biased?’

The authors begin the chapter with a cautionary tale, of sorts. Linus Pauling, winner of two Nobel Prizes and regarded by his peers as perhaps the most brilliant chemist of the 20th century, became notoriously obsessed with the healing powers of vitamin C, in spite of mounting evidence to the contrary, raising the question of how such a brilliant mind could get it so wrong. And perhaps a more important question – if such a mind could be capable of such bias, what hope is there for the rest of us?

So the authors look more closely at why bias occurs. Often it’s a matter of ‘cutting costs’, that is, the processing costs of cognition. An example is the use of the ‘availability heuristic’, which Daniel Kahneman writes about in Thinking, fast and slow, where he also describes it as WYSIATI (what you see is all there is). If, because you work in a hospital, you see many victims of road accidents, you’re liable to over-estimate the number of road accidents that occur in general. Or, because most of your friends hold x political views, you’ll be biased towards thinking that more people hold x political views than is actually the case. It’s a kind of fast and lazy form of inferential thinking, though not always entirely unreliable. Heuristics in general are described as ‘fast and frugal’ ways of thinking, which save a lot in cognitive cost while losing a little in reliability. In fact, research has (apparently) shown that heuristics can sometimes be more reliable than painstaking, time-consuming analysis of a problem.
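The hospital-worker example can be simulated in a few lines – again, every number here is invented purely to make the shape of the bias visible: if accident victims are heavily over-represented in what crosses your path, a frequency estimate read straight off your experience is wildly inflated.

```python
# Toy simulation of the availability heuristic (all numbers invented).
import random

random.seed(1)

TRUE_ACCIDENT_RATE = 0.01  # assumed population rate of serious accidents

# The general population: 1 = involved in a serious road accident.
population = [1 if random.random() < TRUE_ACCIDENT_RATE else 0
              for _ in range(100_000)]

# An emergency-ward worker doesn't meet people uniformly at random:
# accident victims are far more likely to cross their path.
def biased_sample(pop, k, weight_for_accident=50):
    weights = [weight_for_accident if x else 1 for x in pop]
    return random.choices(pop, weights=weights, k=k)

seen = biased_sample(population, k=500)
estimate = sum(seen) / len(seen)  # the 'availability' estimate of the rate

print(f"true rate ~{TRUE_ACCIDENT_RATE}, availability estimate ~{estimate:.2f}")
```

With an over-representation factor of 50, the worker’s naive estimate comes out tens of times higher than the true rate – fast, frugal, and in this case badly wrong.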

One piece of research illustrative of fast-and-frugal cognitive mechanisms involves bumble-bees and their strategies to avoid predators (I won’t give the details here). Why do such rough-and-ready mechanisms persist? Because reasoning as an evolved mechanism is surely directed first and foremost at our individual survival – at being self-preservative rather than strictly right. It follows that some such mechanism, whether we call it reasoning or not, exists in more or less complex form in more or less complex organisms. It also follows from this reasoning-for-survival outlook that we pay far more attention to something surprising that crops up in our environment than to routine stuff. As the authors point out:

Even one-year-old babies expect others to share their surprise. When they see something surprising, they point toward it to share their surprise with nearby adults. And they keep pointing until they obtain the proper reaction or are discouraged by the adults’ lack of reactivity.

Mercier & Sperber, The enigma of reason, p. 210

Needless to say, the adults’ reactions in such an everyday situation are crucial for the child – she learns that what surprised her is perhaps not so surprising, or is pleasantly surprising, or is dangerous, etc. All of this helps us in fast-and-frugal thinking from the very start.

Surprises – events and information that violate our expectations – are always worth paying attention to, in everyday life, for our survival, but also in our pursuit of accurate knowledge of the world, aka science. More about that, and confirmation bias, in the next post.

Reference

The enigma of reason: a new theory of human understanding, by Hugo Mercier & Dan Sperber, 2017

Written by stewart henderson

January 28, 2020 at 2:13 pm

inference in the development of reason, and a look at intuition


various more or less feeble attempts to capture intuition 

Many years ago I spent quite a bit of time getting my head around formal logic, filling scads of paper with symbols whose meanings I’ve long since forgotten, obviously through disuse.
I recognise that logic has its uses, tied to mathematics, e.g. in developing algorithms in the field of information technology, inter alia, but I can’t honestly see its use in everyday life, at least not in my own. Yet logic is generally valued as the sine qua non of proper reasoning, as far as I can see.
Again, though, in the ever-expanding and increasingly effective field of cognitive psychology, reason and reasoning as concepts are undergoing massive and valuable re-evaluation. As Hugo Mercier and Dan Sperber argue in The enigma of reason, they have benefitted (always arguably) from being taken out of the hands of logicians and (most) philosophers and examined from an evolutionary and psychological perspective. Charles Darwin read Hume on inference and reasoning and commented in his diary that scientists should consider reason as gradually developed, that’s to say as an evolved trait. So reasoning capacities should be found in other complex social mammals to varying degrees.    

An argument has been put forward that intuition is a process that fits between inference and reason, or that it represents a kind of middle ground between unconscious inference and conscious reasoning. Daniel Kahneman, for example, has postulated three cognitive systems – perception, intuition (system 1 cognition) and reasoning (system 2). Intuition, according to this hypothesis, is the ‘fast’, experience-based, rule-of-thumb type of thinking that often gets us into trouble, requiring the slower ‘think again’ evaluation (which is also far from perfect) to come to the rescue. However, Mercier and Sperber argue that intuition is a vague term, defined more by what it lacks than by any defining characteristics. It appears to be a slightly more conscious process of acting or thinking by means of a set of inferences.

To use a personal example, I’ve done a lot of cooking over the years, and might reasonably describe myself as an intuitive cook – I know from experience how much of this or that spice to add, how to reduce a sauce, how to create something palatable with limited ingredients and so forth. But this isn’t the product of some kind of intuitive mechanism, rather it’s the product of a set of inferences drawn from trial-and-error experience that is more or less reliable. Mercier and Sperber describe this sense of intuitiveness as a kind of metacognition, or ‘cognition about cognition’, in which we ‘intuit’ that doing this, or thinking that, is ‘about right’, as when we feel or intuit that someone is in a bad mood, or that we left our keys in room x rather than room y. This feeling lies somewhere between consciousness and unconsciousness, and each intuition might vary considerably on that spectrum, and in terms of strength and weakness. Such intuitions are certainly different from perceptions, in that they are feelings we have about something. That is, they belong to us.
Perceptions, on the other hand, are largely imposed on us by the world and by our evolved receptivity to its stimuli.

All of this is intended to take us, or maybe just me, on the path towards a greater understanding of conscious reasoning. There’s a long way to go…

References

The enigma of reason: a new theory of human understanding, by Hugo Mercier and Dan Sperber, 2017

Thinking, fast and slow, by Daniel Kahneman, 2011

Written by stewart henderson

December 4, 2019 at 10:45 pm