an autodidact meets a dilettante…

‘Rise above yourself and grasp the world’ Archimedes – attribution

Archive for the ‘philosophy’ Category

the evolution of reason: intellectualist v interactivist

In The Enigma of Reason, cognitive psychologists Hugo Mercier and Dan Sperber ask the question – what is reason for? I won't go deeply into their own reasoning; I'm more interested in the implications of their conclusions, if correct – and I strongly suspect they are.

They looked at two claims about reason's development. The first is the intellectualist claim, which I might associate with Aristotelian and symbolic logic – premises and conclusions, and the logical fallacies pointed out by various sceptical podcasts and websites (this can also be described as an individualist model of reasoning). The second is the interactionist model, in which reason is most effectively developed collectively.

In effect, the interactionist view is claiming that reason evolved in an interactive environment. This suggests that it is language-dependent, or at least that it couldn't have had its full flowering without language. Mercier and Sperber consider the use of reason in two forms – justificatory and argumentative. Justificatory reasoning tends to be lazy and easily satisfied, whereas it is in the realm of argument that reason comes into its own. We can see the flaws in the arguments of others much more readily than we can our own. This accords with the biblical saying about seeing motes in the eyes of others while being blind to the bricks in our own – or something like that. It also accords with our well-attested over-estimation of ourselves, in terms of our looks, our generosity, our physical abilities and so on.

I’m interested in this interactionist view because it also accords with my take on collaboration, participatory democracy and the bonobo way. Bonobos of course don’t have anything like human reason, not having language, but they do work together more collectively than chimps (and chimp-like humans) and show a feeling towards each other which some researchers have described as ‘spiritual’. For me, a better word would be ‘sympathetic’. Seeing the value in others’ arguments helps to take us outside of ourselves and to recognise the contribution others make to our thinking. We may even come to realise how much we rely on others for our personal development, and that we are, for better or worse, part of a larger, enriching whole. A kind of mildly antagonistic but ultimately fulfilling experience.

An important ingredient in the success of interactionist reasoning is the recognition of, and respect for, difference. That lazy kind of reasoning we engage in when left to ourselves can be exacerbated when our only interactions are with like-minded people. Nowadays we recognise this as a problem with social media and their algorithms. The feelings of solidarity we get from that kind of interaction can of course be very comforting, but also stultifying, and they don't generally lead to clear reasoning. For many, though, the comfort derived from solidarity outweighs the sense of clarity you might, hopefully, get from being made to recognise the flaws in your own arguments. This ghettoisation of reason, like other forms of ghettoisation, is by and large counter-productive. The problem is how to prevent it from happening while reducing the 'culture shock' that this might entail. Within our own WEIRD (from Western, Educated, Industrialised, Rich, Democratic countries) culture, where the differences aren't so vast, being challenged by contrary arguments can be stimulating, even exhilarating. Here's what the rich pre-industrialist Montaigne had to say on the matter:

The study of books is a languishing and feeble motion that heats not, whereas conversation teaches and exercises at once. If I converse with a strong mind and a rough disputant, he presses upon my flanks, and pricks me right and left; his imaginations stir up mine; jealousy, glory, and contention, stimulate and raise me up to something above myself; and acquiescence is a quality altogether tedious in discourse.

Nevertheless, I’ve met people who claim to hate arguments. They’re presumably not talking about philosophical discourse, but they tend to lump all forms of discord together in a negative basket. Mercier and Sperber, however, present a range of research to show that challenges to individual thinking have an improving effect – which is a good advert for diversity. But even the most basic interactions, for example between mother and child, show this effect. A young child might be asked why she took a toy from her sibling, and answer ‘because I want it’. Her mother will point out that the sibling wants it too, and/or had it first. The impact of this counter-argument may not be immediate, but given normal childhood development, it will be the beginning of the child’s road to developing more effective arguments through social interaction. In such an interactive world, reasons need to be much more than purely selfish.

The authors give examples of how the most celebrated intellects can go astray when insufficiently challenged, from dual Nobel prize-winner Linus Pauling’s overblown claims about vitamin C, to Alphonse Bertillon’s ultra-convoluted testimony in favour of Alfred Dreyfus’ guilt, to Thomas Jefferson’s absurdly tendentious arguments against emancipation. They also show how the standard fallacious arguments presented in logic classes can be valid under particular circumstances. Perhaps most convincingly, they present evidence of how group work in which contentious topics were discussed resulted in improvements in individual essays. Those whose essay-writing was preceded by such group discussion produced more complex arguments for both sides than did those who simply read philosophical texts on the issues.

It might seem strange that a self-professed loner like me should be so drawn to an interactionist view of reason’s development. The fact is, I’ve always seen my ‘lonerdom’ as a failing, which I’ve never tried very hard to rectify. Instead, I’ve compensated by interacting with books and, more recently, podcasts, websites and videos. They’re my ‘people’, correcting and modifying my own views through presenting new information and perspectives (and yes, I do sometimes argue and discuss with flesh-and-blood entities). I’ve long argued that we’re the most socially constructed mammals on the planet, but Mercier and Sperber have introduced me to a new word – hypersocial – which packs more punch. This hypersocial quality of humans has undoubtedly made us, for better or worse, the dominant species on the planet. Other species can’t present us with their viewpoints, but we can at least learn from the co-operative behaviours of bonobos, cetaceans, elephants and corvids, to name a few. That’s interaction of a sort. And increased travel and the globalisation of communications mean we can learn about other cultures, how they manage their environments, and how they have coped, or not, with the encroachments of the dominant WEIRD culture.

When I say ‘we’ I mean we, as individuals. The authors of The enigma of reason reject the idea of reason as a ‘group-level adaptation’. The benefits of interactive reason accrue to the individual, and of course this can be passed on to other receptive individuals, but the level of receptivity varies enormously. Myside bias, the default position from our solipsistic childhood, has the useful evolutionary function of self-promotion, even survival, against the world, but our hypersocial human world requires effective interaction. That’s how Australian Aboriginal culture managed to thrive in a set of sub-optimal environments for tens of thousands of years before the WEIRDs arrived, and that’s how WEIRDs have managed to transform those environments, creating a host of problems along with solutions, in a story that continues….

Reference

Hugo Mercier & Dan Sperber, The enigma of reason, 2017

Written by stewart henderson

August 13, 2021 at 3:28 pm

on blogging: a personal view

I have a feeling – I haven’t researched this – that the heyday of blogging is over. I rarely read blogs these days myself, even though I’m a committed blogger, and have been since the mid 2000s. I tend to read books and science magazines, and some online news sites, and I listen to podcasts and watch videos – news, historical, academic, etc.

I should read more blogs. Shoulda-coulda-woulda. Even out of self-interest – reading and commenting on other blogs will drive traffic to my own, as all the advisers say. Perhaps one of the problems is that there aren’t too many blogs like mine – they tend to be personal interest or lifestyle blogs, at least going by those bloggers who ‘like’ my blog, which gives me the distinct impression that those ‘likers’ are just trying to drive traffic to their own blogs, as advised. But the thing is, I like to think of myself as a real writer, whatever that is. Or a public intellectual, ditto.

However, I’ve never been published in a real newspaper, apart from one article 25 years ago in the Adelaide Review (the only article I’ve ever submitted to a newspaper), which led to my only published novel, In Elizabeth. But I’ve never really seen myself as a fiction writer. I’m essentially a diarist turned blogger – and that transition from diary writing to blogging was transformational, because with blogging I was able to imagine that I had a readership. It’s a kind of private fantasy of being a public intellectual.

I’ve always been inspired by my reading, thinking ‘I could do that’. Two very different writers, among many others, inspired me to keep a diary from the early 1980s, to reflect on my own experiences and the world I found myself in: Franz Kafka and Michel de Montaigne. Montaigne’s influence, I think, has been more lasting, not in terms of what he actually wrote, but in his focus on the wider world, though it was Kafka who was the most immediate influence back in those youthful days, when I was still a little more self-obsessed.

Interestingly, though, writing about the world is a self-interested project in many ways. It’s less painful, and less dangerous. I once read that the philosopher and essayist Bertrand Russell, who had attempted suicide a couple of times in his twenties, was asked about those days and how he survived them. ‘I stopped thinking about myself and thought about the world’, he responded.

I seem to recall that Montaigne wrote something like ‘I write not to find out what I think about a topic, but to create that thinking.’ I strongly identify with that sentiment. It really describes my life’s work, such as it is. Considering that, from all outside perspectives, I’m deemed a failure, with a patchy work record, a life mostly spent below the poverty line and virtually no readership as a writer, I’m objective enough and well-read enough to realise that my writing stands up pretty well against those who make a living from their works. Maybe that’s what prevents me from ever feeling suicidal.  

Writing about the world is intrinsically rewarding because it’s a lifelong learning project. Uninformed opinions are of little value, so I’ve been able to take advantage of the internet – which is surely the greatest development in the dissemination of human knowledge since the invention of writing – to embark on this lifelong learning at very little cost. I left school quite young, with no qualifications to speak of, and spent the next few years – actually decades – in and out of dead-end jobs while being both attracted and repelled by the idea of further academic study. At first I imagined myself as a legend in my own lunch-time – the smartest person I knew without academic qualifications of any kind. And of course I could cite my journals as proof. These were the pre-internet days, of course, so the only feedback I got was from the odd friend to whom I read or showed some piece of interest.

My greatest failing, as a person rather than a writer, is my introversion. I’m perhaps too self-reliant, too unwilling or unable to join communities. The presence of others rather overwhelms me. I recall reading, in a Saul Bellow novel, of the Yiddish term trepverter – meaning the responses to conversations you only think of after the moment has passed. For me, this trepverter experience takes up much of my time, because the responses are lengthy, even never-ending. It’s a common thing, of course: Chekhov claimed that the best conversations we have are with ourselves, and Adam Smith used to haunt the Edinburgh streets in his day, arguing with himself on points of economics and probably much more trivial matters. How many people I’ve seen drifting along kerbsides, shouting and gesticulating at some invisible, tormenting adversary.

Anyway, blogging remains my destiny. I tried my hand at podcasting, even vodcasting, but I feel I’m not the most spontaneous thinker, and my voice catches in my throat due to my bronchiectasis – another reason for avoiding others. Yet I love the company of others, in an abstract sort of way. Or perhaps I should say, I like others more than I like company – though I have had great experiences in company with others. But mostly I feel constrained in company, which makes me dislike my public self. That’s why I like reading – it puts me in an idealised company with the writer. I must admit, though, that after my novel was published, and also as a member of the local humanist society, I gave a few public talks or lectures, which I enjoyed immensely – I relish nothing more than being the centre of attention. So it’s an odd combo of shyness and self-confidence that often leaves me scratching my own head.

This also makes my message an odd one. I’m an advocate of community, and of the example of the community-oriented bonobos, yet I’m also something of a loner, awkward with small-talk, wanting to meet people but afraid of being overwhelmed by them. Or of being disappointed.

Here’s an example. Back in the eighties, I read a book called Melanie. It was a collection of diary writings of a young girl who committed suicide, at age 18 as I remember. It was full of light and dark thoughts about family, friends, school and so forth. She came across as witty, perceptive, mostly a ‘normal’ teenager, but with this dark side that seemed incomprehensible to herself. Needless to say, it was an intimate, emotional and impactful reading experience. I later showed the book to a housemate, a student of literature, and his response shocked me. He dismissed it out of hand, as essentially childish, and was particularly annoyed that the girl should have a readership simply because she had suicided. He also protested, rather too much, I felt, about suicide itself, which I found revealing. He found such acts to be both cowardly and selfish. 

I didn’t argue with him, though there was no doubt a lot of trepverter going on in my head afterwards. For the record, I find that suicides can’t be easily generalised: motives are multifactorial, and our control over our own actions is often more questionable than it seems. In any case human sympathy should be in abundant supply, especially for the young.

So sometimes it feels safer to confide in an abstract readership, even a non-existent one. I’ll blog on, one post after another. 

Written by stewart henderson

March 30, 2021 at 3:40 pm

reading matters 2

The beginning of infinity by David Deutsch (quantum physicist and philosopher, as nerdy as he looks)

Content hints

  • science as explanations with most reach, conjecture as origin of knowledge, fallibilism, the solubility of problems, the open-endedness of explanation, inspiration is human but perspiration can be automated, all explanations give birth to new problems, emergent phenomena provide clues about other emergent phenomena, the jump to universality as systems converge and cross-fertilise, AI and the essential problem of creativity, don’t be afraid of infinity and the unlimited growth of knowledge, optimism is the needful option, better Athens than Sparta any day, there is a multiverse, the Copenhagen interpretation and positivism as bad philosophy, political institutions need to create new options, maybe beauty really is objective, static societies use anti-rational memes (e.g. gods) while dynamic societies develop richer, critically valuable ones, creativity has enabled us to transcend biological evolution and to attain new estates of knowledge, Jacob Bronowski’s The Ascent of Man and Karl Popper as inspirations, the beginning….

Written by stewart henderson

June 18, 2020 at 11:46 pm

progressivism: the no-alternative philosophy

Canto: So here’s the thing – I’ve occasionally been asked about my politics and I’ve been a little discomfited about having to describe them in a few words, and I’ve even wondered if I could describe them effectively to myself.

Jacinta: Yes, I find it easier to be sure of what I’m opposed to, such as bullies or authoritarians, which to me are much the same thing. So that means authoritarian governments, controlling governments and so forth. But I also learned early on that the world was unfair, that some kids were richer than others, smarter than others, better-looking than others, through no fault or effort of their own. I was even able to think through this enough to realise that even the kind kids and the nasty ones, the bullies and the scaredy-cats, didn’t have too much choice in the matter. So I often wondered about a government role in making things a bit fairer for those who lost out in terms of exactly where, or into whose hands, they were thrown into the world.

Canto: Well you could say there’s a natural diversity in all those things, intelligence, appearance, wealth, capability and so forth… I’m not sure if it’s a good thing or a bad thing, it just is. I remember once answering that question, about my politics, by describing myself as a pluralist, and then later being disappointed at my self-description. Of course, I wouldn’t want to favour the opposite – what’s that, singularism? But clearly not all differences are beneficial – extreme poverty for example, or its opposite…

Jacinta: You wouldn’t want to be extremely wealthy?

Canto: Well okay, I’ve sometimes fantasised, but mainly in terms of then having more power to make changes in the world. But I’m thinking of the differences that disadvantage us as a group, as a political entity. And here’s one thing I do know about politics. We can’t live without it. We owe our success as a species, for what it’s worth, to our socio-political organisation, something many libertarians seem to be in denial about.

Jacinta: Yes, humans are political animals, if I may improve upon Aristotle. But differences that disadvantage us. Remember eugenics? Perhaps in some ways it’s still with us. Prospective parents might be able to abort their child if they can find out early on that it’s – defective in some way.

Canto: Oh dear, that’s a real can of worms, but those weren’t the kind of differences I was thinking about. Since you raise the subject though, I would say this is a matter of individual choice, but that, overall, ridding the world of those kinds of differences – intellectual disability, dwarfism, intersex, blindness, deafness and so on – wouldn’t be a good thing. But of course that would require a sociopolitical world that would agree with me on that and be supportive of those differences.

Jacinta: So you’re talking about political differences. Or maybe cultural differences?

Canto: Yes, but that’s another can of worms. It’s true that multiculturalism can expand our thinking in many ways, but you must admit that there are some heavy cultures that have attitudes about the ‘place of women’, for example, or about necessary belief in their god…

Jacinta: Or that Taureans make better lovers than Geminis haha.

Canto: Haha, maybe. Some false beliefs have more serious consequences than others. So multiculturalism has its positives and negatives, but you want the dominant culture, or the mix of cultures that ultimately forms a new kind of ‘creole’ overarching culture, to be positive and open. To be progressive. That’s the key word. There’s no valid alternative to a progressive culture. It’s what has gotten us where we are, and that’s not such a bad place, though it’s far from perfect, and always will be.

Jacinta: So progressiveness good, conservatism bad? Is that it?

Canto: Nothing is ever so simple, but you’re on the right track. Progress is a movement forward. Sometimes it’s a little zigzaggy, sometimes two steps forward, one back. I’m taking my cue from David Deutsch’s book The beginning of infinity, which is crystallising much that I’ve thought about politics and culture over the years, and about the role and meaning of science, which as you know has long preoccupied me. Anyway, the opposite of progress is essentially stasis – no change at all. Our former conservative Prime Minister John Howard was fond of sagely saying ‘if it ain’t broke, don’t fix it’, as a way of avoiding the prospect of change. But it isn’t just about fixing, it’s rather more about improving, or transcending. Landline phones didn’t need fixing – they were a functional, functioning technology. But a new technology came along that improved upon them, kept improving, and added internet technology to its portability. We took a step back in our progress many decades ago, methinks, when we abandoned the promise of electrified modes of travel for the infernal combustion engine, and it’s taking us too long to get back on track, but I’m confident we’ll get there eventually…

Jacinta: I get you. Stasis is this safe option, but in fact it doesn’t lead anywhere. We’d be sticking with the ‘old’ way of doing things, which takes us back much further than just the days of landlines, to before any recognisable technology at all. Before using woven cloth, before even using animal skins and fire to improve our chances of survival.

Canto: So it’s not even a safe option. It’s not a viable option at all. You know how there was a drastic drop in the numbers of Homo sapiens some 70,000 years ago – we’ll probably never know how close we came to extinction. I’d bet my life it was some innovation that only our species could have thought of that enabled us to come out of it alive and breeding.

Jacinta: And some of our ancestors would’ve been dragged kicking and screaming towards accepting that innovation. I used to spend time on a forum of topical essays where the comments were dominated by an ‘anti-Enlightenment’ crowd, characters who thought the Enlightenment – presumably the eighteenth century European one (but probably also the British seventeenth century one, the Scottish one, and maybe even the Renaissance to boot) – was the greatest disaster ever suffered by humanity. Needless to say, I soon lost interest. But that’s an extreme example (I think they were religious nutters).

Canto: Deutsch, in a central chapter of The beginning of infinity, compares ancient Athens and Sparta, even employing a Socratic dialogue for local colour. The contrast isn’t just between Athens’ embracing of progress and Sparta’s determination to maintain stasis, but between openness and its opposite. Athens, at its all-too-brief flowering, encouraged philosophical debate and reasoning, rule-breaking artistry, experimentation and general questioning, in the process producing famous dialogues, plays and extraordinary monuments such as the Parthenon. Sparta on the other hand left no legacy to build on or rediscover, and all that we know of its politico-social system comes from non-Spartans, so that if it has been misrepresented it only has itself to blame!

Jacinta: Yet it didn’t last.

Canto: There are many instances of that sort of thing. In the case of Athens, its disastrous Syracusan adventure, its ravagement by the plague, or a plague, or a series of plagues, and the Peloponnesian war all combined to permanently arrest its development. Contingent events. Think too of the Islamic Golden Age, a long period of innovation in mathematics, physics, astronomy, medicine, architecture and much else, brought to an end largely by the Mongol invasions and the collapse of the Abbasid caliphate, but also by a political backlash towards stasis, anti-intellectualism and religiosity, most often associated with the 12th century theologian Abu Hamid al-Ghazali.

Jacinta: Very tragic for our modern world. So how do we guard against the apostles of stasis? By the interminable application of reason? By somehow keeping them off the reins of power, since those apostles will always be with us?

Canto: Not by coercion, no. It has to be a battle of ideas, or maybe I shouldn’t use that sort of male lingo. A demonstration of ideas, in the open market. A demonstration of their effectiveness for improving our world, which means comprehending that world at an ever-deeper, more comprehensive level.

Jacinta: Comprehensively comprehending, that seems commendably comprehensible. But will this improve the world for us all – lift all boats, as Sam Harris likes to say?

Canto: Well, since you mention Harris, I totally agree with him that reason, and science which is so clearly founded on reason, is just as applicable to the moral world, to pointing the way to and developing the best and richest life we all can live, as it is to technology and our deepest understanding of the universe, the multiverse or whatever our fundamental reality happens to be. So we need to keep on developing and building on that science, and communicating it and applying it to the human world and all that it depends upon and influences.

References

The beginning of infinity, by David Deutsch, 2012

https://en.wikipedia.org/wiki/Parthenon

https://www.thenewatlantis.com/publications/why-the-arabic-world-turned-away-from-science

Written by stewart henderson

May 3, 2020 at 4:36 pm

interactional reasoning: some stray thoughts

wateva

As I mentioned in my first post on this topic, bumble-bees have a fast-and-frugal way of obtaining the necessary from flowers while avoiding predators, such as spiders, which is essentially about ‘assessing’ the relative cost of a false negative (sensing there’s no spider when there is) and a false positive (sensing there’s a spider when there’s not). Clearly, the cost of a false negative is likely death, but a false positive also has a cost, in wasting time and energy in the search for safe flowers. It’s better to be safe than sorry, up to a point – the bees still have a job to do, which is their raison d’être. So they’ve evolved to be wary of certain rough-and-ready signs of a spider’s presence. It’s not a fool-proof system, but it ensures that false positives are a little more over-determined than false negatives, enough to ensure overall survival, at least against one particular threat.
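To make that cost asymmetry concrete, here’s a minimal sketch in code – my own toy illustration with invented numbers, not anything from the bee research or from Mercier and Sperber – of the kind of threshold ‘decision’ involved:

```python
# a toy model of the bee's 'better safe than sorry' rule, with made-up costs

COST_FALSE_NEGATIVE = 100.0  # landing on a spider-occupied flower: likely death
COST_FALSE_POSITIVE = 1.0    # skipping a safe flower: wasted time and energy

def should_avoid(p_spider):
    """Avoid the flower when the expected cost of landing
    outweighs the expected cost of moving on."""
    return p_spider * COST_FALSE_NEGATIVE > (1 - p_spider) * COST_FALSE_POSITIVE

# with death ~100 times costlier than a detour, even a weak cue - here a
# spider probability of about 1% - is enough to tip the scales towards avoidance
for p in (0.005, 0.01, 0.05, 0.5):
    print(f"p(spider) = {p}: avoid = {should_avoid(p)}")
```

The point is that the avoidance threshold sits far below fifty-fifty – false positives are ‘over-determined’, as noted above – but not so far below that the bee never gets any foraging done.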

When I’m walking on the street and note that a smoker is approaching, I have an immediate impulse, more or less conscious, to give her a wide berth, and even cross the road if possible. I suffer from bronchiectasis, an airways condition, which is much exacerbated by smoke, dust and other particulates. So it’s an eminently reasonable decision, or impulse (or something between the two). I must admit, though, that this event is generally accompanied by feelings of annoyance and disgust, and thoughts such as ‘smokers are such losers’ – in spite of the fact that, in the long long ago, I was a smoker myself.

Such negative thoughts, though, are self-preservative in much the same way as my avoidance measures. However, they’re not particularly ‘rational’ from the perspective of the intellectualist view of reason. I would do better, of course, in an interactive setting, because I’ve learned – through interactions of a sort (such as my recent reading of Siddhartha Mukherjee’s brilliant cancer book, which in turn sent me to the website of the US Surgeon-General’s report on smoking, and through other readings on the nature of addiction) – to have a much more nuanced and informed view. Still, my ‘smokers are losers’ disgust and disdain is perfectly adequate for my own everyday purposes!

The point is, of course, that reason evolved first and foremost to promote our survival, but further evolved, in our highly social species, to enable us to impress and influence others. And others have developed their own sophisticated reasons to impress and influence us. It follows that the best and most fruitful reasoning comes via interactions – collaborative or argumentative, in the best sense – with our peers. Of course, as I’ve stated it here, this is a hypothesis, and it’s quite hard to prove definitively. We’re all familiar with the apparently solitary geniuses – the Newtons, Darwins and Einsteins – who’ve transformed our understanding, and those who’ve been exposed to formal logic will be impressed with the rigour of Aristotelian and post-Aristotelian systems, and the concepts of validity and soundness as the sine qua non of good reasoning (not to mention those fearfully absolute terms, rational and irrational). Yet these supposedly solitary geniuses often admitted themselves that they ‘stood on the shoulders of giants’: Einstein often mentioned his indebtedness to other thinkers, and Darwin’s correspondence was voluminous. Science is more than ever today a collaborative or competitively interactive process. Think also of the mathematician Paul Erdős, whose obsessive interest in this most rational of activities led to a record number of collaborations.

These are mostly my own off-the-cuff thoughts. I’ll return to Mercier and Sperber’s writings on the evolution of reasoning and its modular nature next time.

Written by stewart henderson

February 1, 2020 at 11:11 am

interactional reasoning: cognitive or myside bias?

In the previous post on this topic, I wrote of surprise as a motivator for questioning what we think we know about our world, a shaking of complacency. In fact we need to pay attention to the unexpected, because of its greater potential for harm (or benefit) than the expected. It follows that expecting the unexpected, or at least being on guard for it, is a reasonable approach. Something which disconfirms our expectations can teach us a lot – it might be the ugly fact that undermines a beautiful theory. So it’s in our interest to watch out for, and even seek out, information that undermines our current knowledge – though it might be pointed out that it’s rarely the person who puts forward a theory who discovers the inconvenient data that undermines it. The philosopher Karl Popper promoted ‘falsificationism’ as a way of testing and tightening our knowledge, and it’s interesting that the very title of his influential work Conjectures and refutations speaks to an interactive approach towards reasoning and evaluating ideas.

In The enigma of reason, Mercier and Sperber argue that confirmation bias can best be explained by the fact that, while most of our initial thinking about a topic is of the heuristic, fast-and-frugal kind, we then spend a great deal more time, when asked about our reasoning re a particular decision, developing post-hoc justifications. Psychological research has borne this out. The authors suggest that this is more a defence of the self, and of our reputation – more of a myside bias than a confirmation bias. Here’s an interesting example of the effect:

Deanna Kuhn, a pioneering scholar of argumentation and cognition, asked participants to take a stand on various social issues – unemployment, school failure and recidivism. Once the participants had given their opinion, they were asked to justify it. Nearly all participants obliged, readily producing reasons to support their point of view. But when they were asked to produce counterarguments to their own view, only 14 percent were consistently able to do so, most drawing a blank instead.

Mercier & Sperber, The enigma of reason, pp213-4

The authors give a number of other examples of research confirming this tendency, including one in which the participants were divided into two groups, one with high political knowledge and another with limited knowledge. The low-knowledge group were able to provide twice as many arguments for their view of an issue as arguments against, but the high-knowledge group performed even more poorly, being unable to provide any arguments against. ‘Greater political knowledge only amplified their confirmation bias’. Again, the reason for this appears to be reputational. The more justifications you can find for your views and decisions, the more your reputation is enhanced, at least in your own mind. There seems no obvious benefit in finding arguments against yourself.

All of this seems very negative, and even disturbing. And it’s a problem that’s been known about for centuries. The authors quote a great passage from Francis Bacon’s Novum Organum:

The human understanding when it has once adopted an opinion… draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects and despises, or else by some distinction sets aside and rejects, in order that by this great and pernicious predetermination the authority of its former conclusions may remain inviolate.

Yet it isn’t all bad, as we shall see in future posts…

Reference

Hugo Mercier and Dan Sperber, The enigma of reason, 2017

Written by stewart henderson

January 29, 2020 at 1:44 pm

preliminary thoughts on reasoning and reputation

In my youth I learned about syllogisms and modus ponens and modus tollens and the invalidity of arguments ad hominem and reductio ad absurdum, and valid but unsound arguments and deduction and induction and all the rest, and even wrote pages filled with ps and qs to get myself clear about it all, and then forgot about it. All that stuff was only rarely applied to everyday life, where, it seemed, our reasoning, though important, was more implicit and intuitive. What I did notice though – being a bit of a loner – was that when I did have a disagreement with someone which left a bitter taste in my mouth, I would afterwards go over the argument in my head to make it stronger, more comprehensive, more convincing and bullet-proof (and of course I would rarely get the chance to present this new and improved version). But interestingly, as part of this process, I would generally make my opponent’s argument stronger as well, even to the point of conceding some ground to her and coming to a reconciliation, out of which both of us would be reputationally enhanced.

In fact, I have to say I spend quite a bit of time having these imaginary to-and-fros, not only with ‘real people’, but often with TV pundits or politicians who’ll never know of my existence. To take another example, when many years ago I was accused of a heinous crime by a young lad to whom I was a foster-carer, I spent excessive amounts of time arguing my defence against imaginary prosecutors of fiendish trickiness, but the case was actually thrown out without my ever having to, or being allowed to, say a word in a court-house, other than ‘not guilty’.

So, is all this just so much wasted energy? Well, of course not. For example, I’ve used all that reflection on the court case to give, from my perspective, a comprehensive account of what happened and why, of my view of the foster-care system and its deficiencies, of the failings of the police in the matter and so forth, to friends and interested parties, as well as in writing on my blog. And it’s the same with all the other conversations with myself – they’ve sharpened my view of the matter in hand, of people’s motivations for holding different views (or my view of their motivations), they’ve caused me to engage in research which has tightened or modified my position, and sometimes to change it altogether.

All of this is preliminary to my response to reading The enigma of reason, by Hugo Mercier and Dan Sperber, which I’m around halfway through. One of the factors they emphasise is this reputational aspect of reason. My work to justify myself in the face of a false allegation was all about restoring or shoring up my reputation, which involved not just explaining why I could not have done what I was accused of doing, but explaining why person x would accuse me of doing it, knowing I would have to contend with ‘where there’s smoke there’s fire’ views that could be put, even if nobody actually put them.

So because we’re concerned, as highly socialised creatures, with our reputations, we engage in a lot of post-hoc reasoning, which is not quite to say post-hoc rationalisation, which we tend to think of as making excuses after the fact (something we do a lot of as well). A major point that Sperber and Mercier are keen to emphasise is that we largely negotiate our way through life via pretty reliable unconscious inferences and intuitions, built up over years of experience, which we only give thought to when they’re challenged or when they fail us in some way. But of course there’s much more to their ‘new theory of human understanding’ than this. In any case much of what the book has to say makes very good sense to me, and I’ll explore this further in future posts.

Written by stewart henderson

January 20, 2020 at 2:05 pm

inference in the development of reason, and a look at intuition

various more or less feeble attempts to capture intuition 

Many years ago I spent quite a bit of time getting my head around formal logic, filling scads of paper with symbols whose meanings I’ve long since forgotten, obviously through disuse. I recognise that logic has its uses, tied with mathematics – e.g. in developing algorithms in the field of information technology, inter alia – but I can’t honestly see its use in everyday life, at least not in my own. Yet logic is generally valued as the sine qua non of proper reasoning, as far as I can see.

Again, though, in the ever-expanding and increasingly effective field of cognitive psychology, reason and reasoning as concepts are undergoing massive and valuable re-evaluation. As Hugo Mercier and Dan Sperber argue in The enigma of reason, they have benefitted (always arguably) from being taken out of the hands of logicians and (most) philosophers and examined from an evolutionary and psychological perspective. Charles Darwin read Hume on inference and reasoning and commented in his diary that scientists should consider reason as gradually developed – that’s to say, as an evolved trait. So reasoning capacities should be found in other complex social mammals to varying degrees.

An argument has been put forward that intuition is a process that fits between inference and reason, or that it represents a kind of middle ground between unconscious inference and conscious reasoning. Daniel Kahneman, for example, has postulated three cognitive systems – perception, intuition (system 1 cognition) and reasoning (system 2). Intuition, according to this hypothesis, is the ‘fast’, experience based, rule-of-thumb type of thinking that often gets us into trouble, requiring the slower ‘think again’ evaluation (which is also far from perfect) to come to the rescue. However, Mercier and Sperber argue that intuition is a vague term, defined more by what it lacks than by any defining characteristics. It appears to be a slightly more conscious process of acting or thinking by means of a set of inferences. To use a personal example, I’ve done a lot of cooking over the years, and might reasonably describe myself as an intuitive cook – I know from experience how much of this or that spice to add, how to reduce a sauce, how to create something palatable with limited ingredients and so forth. But this isn’t the product of some kind of intuitive mechanism, rather it’s the product of a set of inferences drawn from trial-and-error experience that is more or less reliable. Mercier and Sperber describe this sense of intuitiveness as a kind of metacognition, or ‘cognition about cognition’, in which we ‘intuit’ that doing this, or thinking that, is ‘about right’, as when we feel or intuit that someone is in a bad mood, or that we left our keys in room x rather than room y. This feeling lies somewhere between consciousness and unconsciousness, and each intuition might vary considerably on that spectrum, and in terms of strength and weakness. Such intuitions are certainly different from perceptions, in that they are feelings we have about something. That is, they belong to us. Perceptions, on the other hand, are largely imposed on us by the world and by our evolved receptivity to its stimuli.

All of this is intended to take us, or maybe just me, on the path towards a greater understanding of conscious reasoning. There’s a long way to go…

References

The enigma of reason, a new theory of human understanding, by Hugo Mercier and Dan Sperber, 2017

Thinking, fast and slow, by Daniel Kahneman, 2011

Written by stewart henderson

December 4, 2019 at 10:45 pm

On Massimo Pigliucci on scientism 2: brains r us

neuroethics is coming…

In his Point of Inquiry interview, Pigliucci mentions Sam Harris’s book The Moral Landscape a couple of times. Harris seeks to make the argument, in that book, that we can establish, sometime in the future, a science of morality. That is, we can be factual about the good life and its opposite, and we can be scientific about the pathways, though there might be many, that lead towards the good life and away from the bad life. I’m in broad agreement about this, though for pragmatic reasons I would probably prefer the term ‘objective’ to ‘scientific’. Just because it doesn’t frighten the horses so much. As mentioned in my previous post, I don’t want to get hung up on terminology. Science obviously requires objectivity, but it doesn’t seem clear to everyone that morality requires objectivity too. I think that it does (as did, I presume, the authors of the Universal Declaration of Human Rights), and I think Harris argues cogently that it does, based on our well-being as a social species. But Pigliucci says this about Harris’s project:

When Sam Harris wrote his famous book The Moral Landscape, the subtitle was ‘How science can solve moral questions’ – something like that. Well that’s a startling question if you think about it because – holy crap! So I would assume that a typical reader would buy that book and imagine that now he’s going to get answers to moral questions such as whether abortion is permissible and in what circumstances, or the death penalty or something… And get them from say physics or chemistry, maybe neuroscience, since Harris has a degree in neuroscience…

Pigliucci makes some strange assumptions about the ‘typical reader’ here. Maybe I’m a long way from being a ‘typical reader’ (don’t we all want to think that?) but, to me, the subtitle (which is actually ‘How science can determine human values’) suggests, again, methodology. By what methods, or by what means, can human value – that’s to say, what is most valuable to human well-being – be determined? I would certainly not have expected, reading the actual sub-title, and considering the main title of the book, answers to specific moral questions. And I certainly wouldn’t expect answers to those questions to come from physics or chemistry. Pigliucci just mentions those disciplines to make Harris’s views seem more outrageous. That’s not good faith arguing. Neuroscience, however, is closer to the mark. Our brains r us, and if we want to know why a particular mammal behaves ‘badly’, or with puzzling altruism, studying the animal’s brain might be one among many places to start. And yet Pigliucci makes this statement later on re ‘scientistic’ scientists:

It seems to me that the fundamental springboard for all this is a combination of hubris, the conviction that what they do is the most important thing – in the case of Sam Harris for instance, it turns out at the end of the book [The Moral Landscape] it’s not just science that gives you the answers, it’s neuroscience that gives you the answers. Well, surprise surprise, he’s a neuroscientist.

This just seems silly to me. Morality is about our thoughts and actions, which start with brain processes. Our cultural practices affect our neural processes from our birth, and even before our conception, given the cultural attitudes and behaviours of our future parents. It’s very likely that Harris completed his PhD in cognitive neuroscience because of his interest in human behaviour and its ethical consequences (Harris is of course known for his critique of religion, but there seems no doubt that his greatest concerns about religious belief are at base concerns about ethics). Yet according to Pigliucci, had Harris been a physicist he would have written a book on morality in terms of electromagnetic waves or quantum electrodynamics. And of course Pigliucci doesn’t examine Harris’s reasoning as to why he thinks science, and most particularly neuroscience and related disciplines, can determine human values. He appears to simply dismiss the whole project as hubristic and wrong-headed.

I know that I’m being a little harsh in critiquing Pigliucci based on a 20-minute interview, but there doesn’t seem to be any attempt, at least here, to explain why certain topics are or should be off-limits to science, except to imply that it’s obvious. Does he feel, for example, that religious belief should be off-limits to scientific analysis? If so, what do reflective non-religious people do with their puzzlement and wonder about such beliefs? And if it’s worth trying to get to the bottom of what cultural and psychological conditions bring about the neurological networking that disposes people to believe in a loving or vengeful omnipotent creator-being, it’s also worth trying to get to the bottom of other mind-sets that dispose people to behave in ways productive or counter-productive to their well-being. And the reason we’re interested isn’t just curiosity, for the point isn’t just to understand our human world, but to improve it.

Finally, Pigliucci seems to confuse scientism with the lack of interest in philosophy, especially as it pertains to science, among such people in his orbit as Neil deGrasse Tyson and Lawrence Krauss. They’re surely two different things. It isn’t ‘scientism’ for a scientist to eschew a particular branch of philosophy any more than it is for her to eschew a field of science different from her own, though it might sometimes seem a bit narrow-minded. Of course, as a non-scientist and self-professed dilettante I’m drawn to those with a wide range of scientific and other interests, but I certainly recognise the difficulty of getting your head around quantum mechanical, legal, neurological, biochemical and other terminology (I don’t like the word ‘jargon’), when your own ‘rabbit hole’ is so fascinating and enjoyably time-consuming.

There are, of course, examples of scientists claiming too much for the explanatory power of their own disciplines, and that’s always something to watch for, but overall I think the ‘scientism’ claim is more abused than otherwise – ‘weaponised’ is the trendy term for it. And I think Pigliucci needs to be a little more skeptical of his own views about the limits of science.

Written by stewart henderson

May 26, 2019 at 3:09 pm

the self and its brain: free will encore

yeah, right

so long as, in certain regions, social asphyxia shall be possible – in other words, and from a yet more extended point of view, so long as ignorance and misery remain on earth, books like this cannot be useless.

Victor Hugo, author’s preface to Les Miserables

Listening to the Skeptics’ Guide podcast for the first time in a while, I was excited by the reporting on a discovery of great significance in North Dakota – a gigantic graveyard of prehistoric marine and other life forms precisely at the K-T boundary, some 3000 kms from where the asteroid struck. All indications are that the deaths of these creatures were instantaneous and synchronous, the first evidence of mass death at the K-T boundary. I felt I had to write about it, as a self-learning exercise if nothing else.

But then, as I listened to other reports and talking points in one of SGU’s most stimulating podcasts, I was hooked by something else, which I need to get out of the way first. It was a piece of research about the brain, or how people think about it, in particular when deciding court cases. When Steven Novella raised the ‘spectre’ of ‘my brain made me do it’ arguments, and the threat that this might pose to ‘free will’, I knew I had to respond, as this free will stuff keeps on bugging me. So the death of the dinosaurs will have to wait.

The more I’ve thought about this matter, the more I’ve wondered how people – including my earlier self – could imagine that ‘free will’ is compatible with a determinist universe (leaving aside quantum indeterminacy, which I don’t think is relevant to this issue). The best argument for this compatibility, or at least the one I used to use, is that, yes, every act we perform is determined, but the determining factors are so mind-bogglingly complex that it’s ‘as if’ we have free will, and besides, we’re ‘conscious’, we know what we’re doing, we watch ourselves deciding between one act and another, and so of course we could have done otherwise.

Yet I was never quite comfortable about this, and it was in fact the arguments of compatibilists like Dennett that made me think again. They tended to be very cavalier about ‘criminals’ who might try to get away with their crimes by using a determinist argument – not so much ‘my brain made me do it’ as ‘my background of disadvantage and violence made me do it’. Dennett and other philosophers struck me as irritatingly dismissive of this sort of argument, though their own arguments, which usually boiled down to ‘you can always choose to do otherwise’ seemed a little too pat to me. Dennett, I assumed, was, like most academics, a middle-class silver-spoon type who would never have any difficulty resisting, say, getting involved in an armed robbery, or even stealing sweets from the local deli. Others, many others, including many kids I grew up with, were not exactly of that ilk. And as Robert Sapolsky points out in his book Behave, and as the Dunedin longitudinal study tends very much to confirm, the socio-economic environment of our earliest years is largely, though of course not entirely, determinative.

Let’s just run through some of this. Class is real, and in a general sense it makes a big difference. To simplify, and to recall how ancient the differences are, I’ll just name two classes, the patricians and the plebs (or think upper/lower, over/under, haves/have-nots).

Various studies have shown that, by age five, the more plebby you are (on average):

  • the higher the basal glucocorticoid levels and/or the more reactive the glucocorticoid stress response
  • the thinner the frontal cortex and the lower its metabolism
  • the poorer the frontal function concerning working memory, emotion regulation, impulse control, and executive decision making.

All of this comes from Sapolsky, who cites all the research at the end of his book. I’ll do the same at the end of this post (which doesn’t mean I’ve analysed that research – I’m just a pleb after all, and happy to trust Sapolsky). He goes on to say this:

moreover, to achieve equivalent frontal regulation, [plebeian] kids must activate more frontal cortex than do [patrician] kids. In addition, childhood poverty impairs maturation of the corpus callosum, a bundle of axonal fibres connecting the two hemispheres and integrating their function. This is so wrong – foolishly pick a poor family to be born into, and by kindergarten, the odds of your succeeding at life’s marshmallow tests are already stacked against you.

Behave, pp195-6

Of course, this is just the sort of ‘social asphyxia’ Victor Hugo was at pains to highlight in his great work. You don’t need to be a neurologist to realise all this, but the research helps to hammer it home.

These class differences are also reflected in parenting styles (and of course I’m always talking in general terms here). Pleb parents and ‘developing world’ parents are more concerned to keep their kids alive and protected from the world, while patrician and ‘developed world’ parents encourage their kids to explore. The patrician parent is more a teacher and facilitator; the plebeian parent is more like a prison guard. Sapolsky cites research into parenting styles in ‘three tribes’: wealthy and privileged; poorish but honest (blue collar); poor and crime-ridden. The poor neighbourhood’s parents emphasised ‘hard defensive individualism’ – don’t let anyone push you around, be tough. Parenting was authoritarian, as was also the case in the blue-collar neighbourhood, though the style there was characterised as ‘hard offensive individualism’ – you can get ahead if you work hard enough, maybe even graduate into the middle class. Respect for family authority was pushed in both these neighbourhoods. I don’t think I need to elaborate too much on what the patrician parenting (soft individualism) was like – more choice, more stimulation, better health. And of course, ‘real life’ people don’t fit neatly into these categories – there’s an infinity of variants – but they’re all determining.

And here’s another quote from Sapolsky on research into gene/environment interactions.

Heritability of various aspects of cognitive development is very high (e.g. around 70% for IQ) in kids from [patrician] families but is only around 10% in [plebeian] kids. Thus patrician-ness allows the full range of genetic influences on cognition to flourish, whereas plebeian settings restrict them. In other words, genes are nearly irrelevant to cognitive development if you’re growing up in awful poverty – poverty’s adverse effects trump the genetics.

Behave, p249

Another example of the huge impact of environment/class, too often underplayed by ivory tower philosophers and the silver-spoon judiciary.

Sapolsky makes some interesting points, always research-based of course, about the broader environment we inhabit. Is the country we live in more communal or more individualistic? Is there high or low income inequality? Generally, cultures with high income inequality have less ‘social capital’, meaning levels of trust, reciprocity and cooperation. Such cultures/countries generally vote less often and join fewer clubs and mutual societies. Research into game-playing, a beloved tool of psychology, shows that individuals from high-inequality/low-social-capital countries display high levels of bullying and of anti-social punishment (punishing ‘overly’ generous players because they make other players look bad) during economic games. They tend, in fact, to punish the too-generous more than they punish actual cheaters (think Trump).
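For a feel of how these economic games work, here’s a toy sketch – my own illustration with invented numbers, not anything from Sapolsky or the research he cites – of a one-round public goods game with a punishment stage, the standard paradigm behind findings like these:

```python
# a toy public goods game with a punishment stage - illustrative numbers only

MULTIPLIER = 1.6    # the common pot is multiplied, then shared equally
PUNISH_COST = 1.0   # what a punisher pays...
PUNISH_FINE = 3.0   # ...to deduct this much from their target

def payoffs(endowment, contributions):
    """Each player keeps what they didn't contribute,
    plus an equal share of the multiplied pot."""
    share = sum(contributions) * MULTIPLIER / len(contributions)
    return [endowment - c + share for c in contributions]

def punish(p, punisher, target):
    """Punishment is costly for the punisher and worse for the target.
    'Anti-social punishment' is simply choosing a generous target."""
    p[punisher] -= PUNISH_COST
    p[target] -= PUNISH_FINE
    return p

# three players with 10 units each: a free-rider, a middling type, a generous one
p = payoffs(10, [0, 5, 10])
print(p)                # [18.0, 13.0, 8.0] - the free-rider already does best
print(punish(p, 0, 2))  # [17.0, 13.0, 5.0] - the free-rider punishes the generous player anyway
```

In high-social-capital settings, punishment mostly targets the free-riders; in the low-social-capital settings of the research above, it flies the other way, at the ‘overly’ generous.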

So the determining factors in who we are and why we make the decisions we do range from the genetic and hormonal to the broadly cultural. A couple have two kids. One just happens to be conventionally good-looking, the other not so much. Many aspects of their lives will be profoundly affected by this simple difference. One screams and cries almost every night for her first twelve months or so, for some reason (and there are reasons), while the other is relatively placid over the same period. Again, whatever caused this difference will likely profoundly affect their life trajectories. I could go on ad nauseam about these ‘little’ differences and their lifelong effects, as well as the greater differences of culture, environment, social capital and the like. Our sense of consciousness gives us a feeling of control which is largely illusory.

It’s strange to me that Dr Novella seems troubled by ‘my brain made me do it’ arguments, because in a sense that is the correct, if trivial, argument to ‘justify’ all our actions. Our brains ‘make us’ walk, talk, eat, think and breathe. Brains R Us. And not even brains – octopuses are newly recognised as problem-solvers and tool-users without even having brains in the usual sense; they have more of a decentralised nervous system, with nine mini-brains somehow co-ordinating when needed. So ‘my brain made me do it’ essentially means ‘I made me do it’, which takes us nowhere. What makes us do things are the factors shaping our brain processes, and they have nothing to do with ‘free will’, this strange, inexplicable phenomenon which supposedly lies outside these complex but powerfully determining factors yet is compatible with them. To say that we can do otherwise is just saying – it’s not a proof of anything.

To be fair to Steve Novella and his band of rogues, they accept that this is an enormously complex issue, regarding individual responsibility, crime and punishment, culpability and the like. That’s why the free will issue isn’t just a philosophical game we’re playing. And lack of free will shouldn’t by any means be confused with fatalism. We can change or mitigate the factors that make us who we are in a huge variety of ways. More understanding of the factors that bring out the best in us, and fostering those factors, is what is urgently required.

just thought I’d chuck this in

Research articles and reading

Behave, Robert Sapolsky, Bodley Head, 2017

These are just a taster of the research articles and references used by Sapolsky re the above.

C Heim et al, ‘Pituitary-adrenal and autonomic responses to stress in women after sexual and physical abuse in childhood’

R J Lee et al, ‘CSF corticotrophin-releasing factor in personality disorder: relationship with self-reported parental care’

P McGowan et al, ‘Epigenetic regulation of the glucocorticoid receptor in human brain associates with childhood abuse’

L Carpenter et al, ‘Cerebrospinal fluid corticotropin-releasing factor and perceived early life stress in depressed patients and healthy control subjects’

S Lupien et al, ‘Effects of stress throughout the lifespan on the brain, behaviour and cognition’

A Kusserow, ‘De-homogenising American individualism: socialising hard and soft individualism in Manhattan and Queens’

C Kobayashi et al, ‘Cultural and linguistic influence on neural bases of “theory of mind”’

S Kitayama & A Uskul, ‘Culture, mind and the brain: current evidence and future directions’.

etc etc etc

Written by stewart henderson

April 23, 2019 at 10:53 am