a bonobo humanity?

‘Rise above yourself and grasp the world’ – attributed to Archimedes

Archive for the ‘research’ Category

more oxytocin fantasies: an interminable conversation 3


not sure if this measures a significant difference

 

Canto: So, as it turns out, the bonobo-oxytocin connection is all the rage on the internet. I mean, there are at least two articles on it. Here’s a quote from a PubMed article called ‘Divergent effects of oxytocin on eye contact in bonobos and chimpanzees’:

Previous studies have shown that bonobos and chimpanzees, humans’ two closest relatives, demonstrate considerable behavioral differences, including that bonobos look more at others’ eyes than chimpanzees. Oxytocin is known to increase attention to another’s eyes in many mammalian species (e.g. dogs, monkeys, and humans), yet this effect has not been tested in any nonhuman great ape species.

Jacinta: Hmm, so how do they know this? Presumably they’ve dosed subjects with oxytocin and measured their eye contact against controls?

Canto: No no, they know that bonobos have more eye contact than chimps, simply from observation. So they might infer from this that bonobos produce more oxytocin naturally than chimps…

Jacinta: So do women produce more oxytocin than men I wonder? I presume women make more eye contact than men.

Canto: Well in this study they dosed both bonobos and chimps with oxytocin, and the effect – more eye contact – was greater in bonobos than chimps. In fact, chimps even tended to avoid eye contact when shown images of conspecifics.

Jacinta: So, it’s a matter of interplay between this hormone/neurotransmitter and social conditioning?

Canto: Maybe, but you’d think that an increase in this supposedly touchy-feely hormone would act against social conditioning. Isn’t this the point of that drug, ecstasy? That it reduces social inhibitions… But presumably nothing is ever so simple. Being poor, I only have access to the abstract of this paper, but another abstract, which looks at the effects of oxytocin and vasopressin on chimps, describes them as neuropeptides, just to confuse matters. The abstract also refers to about a dozen brain regions, as well as specific oxytocin and vasopressin receptors, so it gets pretty complicated.

Jacinta: Okay, vasopressin… from Wikipedia:

Human vasopressin, also called antidiuretic hormone (ADH), arginine vasopressin (AVP), or argipressin, is a hormone synthesised from the AVP gene as a peptide prohormone in neurons in the hypothalamus, and is converted to AVP. It then travels down the axon terminating in the posterior pituitary, and is released from vesicles into the circulation in response to extracellular hypertonicity (hyperosmolality). AVP has two major functions… etc etc

Canto: Okay thanks for that, let’s stick with oxytocin for now. It’s produced in the hypothalamus, a smallish region buried deep within the brain, just below the larger thalamus and above the even smaller amygdala. It releases and manages a variety of hormones. Brain signals are sent to the hypothalamus, exciting it to release oxytocin and other hormones, which are secreted into the bloodstream by the posterior pituitary gland….

Jacinta: Can you tell me what oxytocin is actually made of? Its structure? The term ‘hormone’ is just a black box to me.

Canto: Okay, here’s a diagram of oxytocin to try and make sense of:

It’s a polypeptide. A peptide is basically an amino acid chain. FYI:

An amino acid is an organic molecule that is made up of a basic amino group (−NH2), an acidic carboxyl group (−COOH), and an organic R group (or side chain) that is unique to each amino acid. The term amino acid is short for α-amino [alpha-amino] carboxylic acid.

Jacinta: So these are coded for, ultimately, by genes?

Canto: Yes, we’re heading backwards here, but each amino acid is encoded by a codon – a sequence of three of the four bases in our DNA. Anyway, oxytocin, among other things, is sometimes given to women in labour – it helps with the contractions, apparently. I’ve also heard that the recreational drug ‘ecstasy’, or MDMA, works essentially by releasing oxytocin.
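
Oxytocin itself is a short peptide of just nine amino acids – Cys-Tyr-Ile-Gln-Asn-Cys-Pro-Leu-Gly. To make the codon idea concrete, here’s a minimal Python sketch: the codon table is a small subset of the standard genetic code, but the DNA string is made up purely so that it spells out oxytocin’s nine residues – it’s not the actual human OXT gene sequence.

# translating a toy DNA coding-strand string, three bases (one codon) at a time
CODON_TABLE = {
    'TGT': 'Cys', 'TGC': 'Cys',
    'TAT': 'Tyr', 'TAC': 'Tyr',
    'ATT': 'Ile', 'ATC': 'Ile',
    'CAA': 'Gln', 'CAG': 'Gln',
    'AAT': 'Asn', 'AAC': 'Asn',
    'CCT': 'Pro', 'CCA': 'Pro',
    'CTG': 'Leu', 'TTA': 'Leu',
    'GGT': 'Gly', 'GGA': 'Gly',
}

def translate(dna):
    """Split a coding-strand DNA string into codons and map each to an amino acid."""
    codons = [dna[i:i+3] for i in range(0, len(dna) - len(dna) % 3, 3)]
    return [CODON_TABLE.get(codon, '???') for codon in codons]

toy_dna = 'TGTTATATTCAAAATTGCCCTCTGGGT'   # invented for illustration
print('-'.join(translate(toy_dna)))       # Cys-Tyr-Ile-Gln-Asn-Cys-Pro-Leu-Gly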

Jacinta: It just so happens I’ve found an interesting 2014 paper published in Neuropsychopharmacology, my new favourite journal, called ‘Effects of MDMA and Intranasal Oxytocin on Social and Emotional Processing’, and here’s a quote from the abstract:

Oxytocin produced small but significant increases in feelings of sociability and enhanced recognition of sad facial expressions. Additionally, responses to oxytocin were related to responses to MDMA with subjects on two subjective measures of sociability. Thus, MDMA increased euphoria and feelings of sociability, perhaps by reducing sensitivity to subtle signs of negative emotions in others. The present findings provide only limited support for the idea that oxytocin produces the prosocial effects of MDMA.

Canto: That is interesting. If that finding can be replicated, I’d say forget the MDMA, dose people with oxytocin. A small but significant increase in feelings of sociability might just be enough to transform our human world.

Jacinta: Hmmm. Small but significant – that sounds a mite contradictory.

Canto: Not the same as significantly small. That slightly significant dose, administered to Messrs Pudding and Pingpong and their enablers, might’ve saved the lives of many Ukrainians, Uyghurs and advocates of multiculturalism, democracy, feminism and other wild and woolly notions. And it doesn’t really transform characters, it just softens their edges.

Jacinta: Yes it’s a nice fantasy – more productive than butchering the butchers, a fantasy I occasionally indulge in. But not workable really.

Canto: Why not? We dosed petrol with lead, and look at how that worked out. It certainly had an effect. In Japan they still use radium baths (at very low levels) for health purposes, even claiming it as a cure for cancer. I’m not sure if oxytocin baths can ever be a thing, but if so I’m sure there will be early adopters.

Jacinta: Well, it’s good to think positively. Oxytocin is often thought of as a bonding hormone between mother and child. The key would be to ensure it facilitates a more general bonding: to cause Mr Pingpong, for example, to see Uyghur, Tibetan, Yi, Limi, and all the other non-Han ethnicities in China as his sisters – or lovers even, revolting as that would be to those peoples.

Canto: Better than being their oppressors and exterminators.

Jacinta: Slightly. But I wonder, quite seriously, if, assuming such a dose of bonding could be effectuated, we could still function as the sometimes rational, problem-solving, highly creative species we indubitably are. Would there be a price to pay for all that oxytocin? And how would this affect all those other hormones and neurotransmitters and all their myriad effects? Humans are notorious for causing extra problems with their solutions, e.g. lead, DDT, etc etc.

Canto: Well, there’s no need to worry about the fallout from this solution as yet. I just googled Putin and oxytocin together and came up empty. Obviously we’re way ahead of the curve.

Jacinta: Haha, it’s not a curve these days, it’s a pivot. Get with the program!

References

https://pubmed.ncbi.nlm.nih.gov/33388536/

https://www.yourhormones.info/hormones/oxytocin/

https://www.acs.org/content/acs/en/molecule-of-the-week/archive/o/oxytocin.html

https://www.britannica.com/science/amino-acid

https://www.wsj.com/articles/BL-JRTB-11551

 

Written by stewart henderson

August 4, 2022 at 10:38 pm

21 – dolphins, bonobos, sex and pleasure


bonobos at Jacksonville zoo

I enjoyed a little boat trip off the north-east coast of Kangaroo Island recently. The owner, our guide, bounced us up and down the shoreline east of Christmas Cove to view caves in the limestone cliffs, seabirds such as wedge-tailed eagles on the cliff-tops, and above all to search for a pod of dolphins known to be using the area as a daytime resting-place.

After a few bouts of bouncing eastward and westward we were becoming skeptical, though by no means annoyed. A year before, the island, Australia’s third largest after Tasmania and Melville Island, had been ravaged by bushfires, devastating vegetation and wildlife, and seriously damaging the island’s fragile economy, not to say ecology, and we were happy to make our tiny contribution without great expectations of sighting fabulous beasties. 

So we were delighted, on heading eastward again, to spot a few fins bobbing and dipping in the water ahead. Slowing toward them, we were told there were about 25 dolphins in this pod (the term was first used by whalers in the early nineteenth century, for reasons unknown). I soon gave up trying to count them as identical-looking fins appeared and disappeared and vaguely discerned bodies twisted and turned just below the surface. They seemed to form pairs now and then, breaking the surface sleekly and synchronously in elegant arcs. Dolphins, I learned, spend their days lolling about near the shore in these pods after a night of hunting out at sea. They seemed aware of but unconcerned by our presence, and at one time the whole group disappeared then reappeared on the other side of our boat, bobbing and slow-twirling as before.

I was struck by a remark by our guide that dolphins are one of the few mammals that mate for fun or pleasure. Of course I made an immediate connection with bonobos, but then I wondered, what exactly does the verb ‘to mate’ mean? We humans never describe ourselves as mating, that’s for the birds, etc. We fuck, screw, bonk, shag, hump and bone, we more coyly sleep together, and more romantically make love (not allowed for other species), but we’re way above mating.

‘Mating’ brings up two internet definitions, the action of animals coming together to breed, and copulation. So dolphins, and bonobos and humans, often come together to breed – but actually not to breed. As for copulation, that’s rarely used for humans, just as fornication is rarely used for non-humans. The latter is, of course, a term of mostly religious disapproval, and non-humans are too lowly to be worthy of moral judgment. 

Of course we do apply mating to humans with a pinch of irony, as in the mating game, and this blurs the line between humans and others, but not enough for me. The point is that dolphins and bonobos use sex, which may not be the full rumpy-pumpy (dolphins don’t even have rumps to speak of), to bond with each other, to ease tension, to have fun, as our guide said. But then, don’t all species have sex purely for pleasure, or at least because they’re driven to do so by sensation? Do cats, dogs, birds and flies have sex with the intention of reproducing? I don’t think so.

Human sex is pleasurable, so I’ve heard, and I expect bonobo sex is too. Fly sex probably not, or so I thought, but I’m probably wrong. Researchers have found that male fruit flies enjoy ejaculating, and tend to consume alcohol when denied sex. I know exactly how they feel. Anyway, fruit flies have long been favourites for biological research, and more recently they’ve found that ‘a protein present in the ejaculate of male fruit flies activates long-term memory formation in the brains of their female partners’. It rather makes me wonder what effect this kind of research has on the researchers themselves, but I’m sure it’s all for the best. 

One thing is certain: cats and dogs, and I’ve had a few, feel pleasure. Cats are appallingly sensual, and I’ve probably had more sexual advances from dogs than from humans, though whether they involved pleasure I can’t be sure. Generally our understanding of non-human sex has expanded in recent decades, as our sense of our specialness in everything has receded. It’s also true that we’ve tended to look at other species with a scientific instrumentalism, that’s to say from the viewpoint of evolution, breeding, genetics and other forms of categorisation, rather than from an emotional or sensory viewpoint.

When I was very young I read a book by Ernest Thompson Seton called The biography of a grizzly. This story of Wahb, a male grizzly whose family was wiped out by hunters, and who survived to become the most powerful bear in the region, before inevitable decline and death, had an unforgettable emotional impact. I’m glad I read it though, as, sentimentalised though it might’ve been, it inoculated me against the scientific tendency, now changing, to see any animal as an it, rather than he or she or dad or mum or brother or sister. So this idea of putting oneself in the paws of a grizzly or the feet of a bonobo has long been perfectly legitimate to me. 

In 2014 Jason Goldman wrote an article entitled Do animals have sex for pleasure?, in which he cited many instances of other species – bonobos of course heading the list – engaging in oral and penetrative sex ‘out of season’, when pregnancy is precluded. They include capuchin monkeys, macaques, spotted hyenas, bears, lions and fruit bats. It stands to reason that the physiological, whole-of-body pleasure we derive from sex is shared by other species, and indulged in by them, and this includes what we call homosex, and masturbation. Australia’s premier science magazine, Cosmos, claimed a few years ago that some 6000 species (or was it 600?) have been observed engaging in homosexual activity, which does sound funny when talking about what we would habitually call lower life forms.

All of these findings have had the effect, and perhaps the intention, of loosening our uptight attitudes toward sex, as well as upending our notions of human specialness. But the behaviour of bonobos, who at times look strikingly like us, is more immediately impactful than anything fruit flies or fruit bats might do. Just the other day I watched a video of bonobos in Jacksonville zoo, Florida. Two of them were lying on the ground close together, and kissing each other, on the lips, again and again. Were they male? female? one of each? Who knows, it was so beautiful to watch.  

References

Ernest Thompson Seton, The biography of a grizzly, 1900. 

https://www.the-scientist.com/news-opinion/male-fruit-flies-take-pleasure-in-having-sex-30867

https://www.the-scientist.com/news-opinion/sex-promotes-lasting-memories-in-female-flies-66763

Bonobos at Jacksonville Zoo (video)

 

Written by stewart henderson

January 10, 2021 at 1:31 pm

interactional reasoning and confirmation bias – introductory


I first learned about confirmation bias, and motivated reasoning, through my involvement with skeptical movements and through the Skeptics’ Guide to the Universe (SGU) podcast. As has been pointed out by the SGU and elsewhere, confirmation bias – this strong tendency to acknowledge and support views, on any topic, that confirm our own, and to dismiss or avoid listening to views from the opposite side – is a feature of liberal and conservative thought in equal measure, and as much a feature of the thinking of highly credentialed public intellectuals as of your average unlearned sot. The problem of confirmation bias, this ‘problem in our heads’, has been blamed for the current social media maladies we supposedly suffer from, creating increasingly partisan echo-chambers in which we allow ourselves, or are ‘driven by clicks’, to be shut off from opposing views and arguments.

But is confirmation bias quite the bogey it’s generally claimed to be? Is it possibly an evolved feature of our reasoning? This raises fundamental questions about the very nature of what we call reason, and how and why it evolved in the first place. Obviously I’m not going to be able to deal with this Big Issue in the space of the short blog pieces I’ve been writing recently, so it’ll be covered by a number of posts. And, just as obviously, my questioning of confirmation bias hasn’t sprung from my own somewhat limited genius – it pains me to admit – but from some current reading material.

The enigma of reason: a new theory of human understanding, by research psychologists Hugo Mercier and Dan Sperber, is a roolly important and timely piece of work, IMHO. So important that I launch into any attempt to summarise it with much trepidation. Anyway, their argument is that reasoning is largely an interactive tool, and evolved as such. They contrast the interactive view of reason with the ‘intellectualist’ view, which begins with Aristotle and his monumentally influential work on logic and logical fallacies. So with that in mind, they tackle the issue of confirmation bias in chapter 11 of their book, entitled ‘Why is reason biased?’

The authors begin the chapter with a cautionary tale, of sorts. Linus Pauling, winner of two Nobel Prizes and regarded by his peers as perhaps the most brilliant biochemist of the 20th century, became notoriously obsessed with the healing powers of vitamin C, in spite of mounting evidence to the contrary, raising the question as to how such a brilliant mind could get it so wrong. And perhaps a more important question – if such a mind could be capable of such bias, what hope is there for the rest of us?

So the authors look more closely at why bias occurs. Often it’s a matter of ‘cutting costs’, that is, the processing costs of cognition. An example is the use of the ‘availability heuristic’, which Daniel Kahneman writes about in Thinking fast and slow, where he also describes it in terms of WYSIATI (what you see is all there is). If, because you work in a hospital, you see many victims of road accidents, you’re liable to over-estimate the number of road accidents that occur in general. Or, because most of your friends hold x political views, you’ll be biased towards thinking that more people hold x political views than is actually the case. It’s a kind of fast and lazy form of inferential thinking, though not always entirely unreliable. Heuristics in general are described as ‘fast and frugal’ ways of thinking, which save a lot in cognitive cost while losing a little in reliability. In fact, as research has shown (apparently), sometimes heuristics can be more reliable than painstaking, time-consuming analysis of a problem.
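
To make the hospital example concrete, here’s a minimal simulation sketch in Python – all the numbers are invented purely for illustration, not drawn from any actual accident statistics:

# a toy simulation of the availability heuristic: estimating how common road
# accidents are from a biased sample (what a hospital worker happens to see)
import random

random.seed(1)
TRUE_ACCIDENT_RATE = 0.01   # assumed 'real world' rate, invented for illustration

# the general population: 1 = involved in a road accident this year, 0 = not
population = [1 if random.random() < TRUE_ACCIDENT_RATE else 0 for _ in range(100_000)]

# an unbiased observer samples people at random
random_sample = random.sample(population, 500)

# a hospital worker's 'sample' over-represents accident victims,
# because that's who turns up at the hospital
victims = [p for p in population if p == 1]
others = [p for p in population if p == 0]
hospital_sample = random.sample(victims, 200) + random.sample(others, 300)

print('true rate:            ', TRUE_ACCIDENT_RATE)
print('unbiased estimate:    ', sum(random_sample) / len(random_sample))
print('availability estimate:', sum(hospital_sample) / len(hospital_sample))   # 0.4 - wildly inflated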

One piece of research illustrative of fast-and-frugal cognitive mechanisms involves bumble-bees and their strategies to avoid predators (I won’t give the details here). And why shouldn’t such mechanisms be widespread? Reasoning as an evolved mechanism is surely directed first and foremost at individual survival – at being preservative rather than right. It follows that some such mechanism, whether we call it reasoning or not, exists in more or less complex form in more or less complex organisms. It also follows from this reasoning-for-survival outlook that we pay far more attention to something surprising that crops up in our environment than to routine stuff. As the authors point out:

Even one-year-old babies expect others to share their surprise. When they see something surprising, they point toward it to share their surprise with nearby adults. And they keep pointing until they obtain the proper reaction or are discouraged by the adults’ lack of reactivity.

Mercier & Sperber, The enigma of reason, p210

Needless to say, the adults’ reactions in such an everyday situation are crucial for the child – she learns that what surprised her is perhaps not so surprising, or is pleasantly surprising, or is dangerous, etc. All of this helps us in fast-and-frugal thinking from the very start.

Surprises – events and information that violate our expectations – are always worth paying attention to, in everyday life, for our survival, but also in our pursuit of accurate knowledge of the world, aka science. More about that, and confirmation bias, in the next post.

Reference

The enigma of reason: a new theory of human understanding, by Hugo Mercier & Dan Sperber, 2017

Written by stewart henderson

January 28, 2020 at 2:13 pm

The statin controversy


‘Never edit your own writing!’ – Brian J Ford

one thing you can be sure of – this claim (posted by a British chiropractor) is meaningless bullshit

I read Ben Goldacre’s quite demanding book Bad pharma some years ago, and that’s where I learned about statins, but I don’t recall much. I do recall that, not long after I read the book, I was at a skeptics meet-up when Dr Goldacre’s name came up. The man next to me started literally spitting chips at the mention – he was eating a massive bowl of chips and was grossly overweight (not that I’m assuming anything from this – just saying, haha). He roolly didn’t like Dr Goldacre. What went through my head was – some people may be really invested in having a magic pill that allows them to live forever and a day no matter what their diet or lifestyle.

I’ve just discovered that Goldacre has a new book out, entirely on this topic, which I intend to read, but my current decision to explore the issue is based on listening to Dr Maryanne Demasi’s talk, ‘statin wars – have we been misled by the evidence?’, available on YouTube. I very much recall the massive Catalyst controversy a few years ago, when a two-part special they did on statins finally led to the demise of the program. Without knowing any details, I thought this was a bit OTT, but when I heard Dr Norman Swan, a valued health professional and presenter of the ABC’s Health Report, railing about the irresponsibility of the statin special, I frankly didn’t know what to think.

So statins are lipid-lowering medications that come in various flavours, including atorvastatin, fluvastatin, lovastatin and rosuvastatin. Lipitor, a brand name for atorvastatin manufactured by Pfizer, is the most profitable drug in the history of medicine. I’ve never taken statins myself, and I’m starting this piece as a more or less total beginner on the topic. I’ve read the Wikipedia entry on statins, which is quite comprehensive, with a very long reference list. Of course it’s not entirely comprehensible to a lay person, but that’s not a criticism – the biochemistry and pharmacology involved are complex. It’s also clearly pro-statin. It includes this interesting sentence:

 A systematic review co-authored by Ben Goldacre concluded that only a small fraction of side effects reported by people on statins are actually attributable to the statin.[63]

It’s interesting that Goldacre, and nobody else, is mentioned here as a co-author. It makes me wonder…

My only quibble, as a lay person, is that the positive effects of these statins, and their relatively few side-effects, seem almost too good to be true. I speak, admittedly, as a person who’s always been ultra-skeptical of ‘magic bullets’.

Which brings me to issues raised in Dr Demasi’s talk, and not addressed in the Wikipedia article. They include the idea, promoted by an ‘influential group’, that statins should be prescribed for everyone over 50, regardless of cholesterol levels. Children with high cholesterol levels are being screened for statin use, and Pfizer has apparently designed fruit-flavoured statins for use by children and adolescents. Others have suggested using statins as condiments in fast-food burgers, and even adding statins to the public water supply. It’s easy to see how such ‘innovations’ involve making scads of money, but this isn’t to deny that statins are effective in many if not most instances, and we should undoubtedly celebrate the work of the Japanese biochemist Akira Endo, who pioneered the work on enzyme inhibitors that led to the discovery of mevastatin, produced by the fungus Penicillium citrinum.

But Demasi made some other interesting points, firstly about how drug companies like Pfizer might seek to maximise their profits. One obvious way is to widen the market – for example by lobbying for a lowering of the standard level of cholesterol in the blood considered dangerous. From the early 2000s in the US, ‘high cholesterol’ was officially shifted down from as high as 6.5 to below 5, moving vast numbers of people into the category of ‘needing’ these cholesterol-lowering drugs. Demasi points out that this lowering wasn’t based on any new science, and that the body responsible for these decisions, the National Cholesterol Education Program (NCEP), was loaded with people with financial ties to the statin industry. To be fair, though, one might expect doctors and specialists concerned with cholesterol to be invested, financially or otherwise, in ways of lowering it. They might also have felt, for purely scientific reasons, that the level of cholesterol considered dangerous was long overdue for adjustment.

Another change occurred in 2013 when two major heart health associations in the US decided to abandon a single number in terms of risk factors for heart disease/failure. Instead they looked at cholesterol, blood pressure, weight, diabetes and other factors to calculate a ‘percentage risk’ of cardiovascular problems. If this calculated risk came out at more than 7.5% over the next 10 years, a statin should be prescribed. A similar percentage-risk system was used in the UK, but the statin prescription started at 20%. Why the huge discrepancy? Six months later, the Brits brought their threshold down to 10%. The US change brought almost 13 million people, mostly elderly, onto the radar for immediate statin prescription. The method of calculation in the US was independently analysed, and found to over-estimate the risk, sometimes by more than 100%. Erring on the side of caution? Or was there a lot of self-interest involved? It could fairly be a combination. The term for all this is ‘statinisation’, apparently. It’s attributed to John Ioannidis, a Stanford professor of medicine and a noted ‘scourge of sloppy science’. If you look up statinisation, you’ll find a storm of online articles of varying quality and temper on the issue – though most, I notice, are five years old or more. I’m not sure what that signifies, but I will say that, while we’ll always get the anti-science crowd baying against big pharma, vaccinations and GM poison, there’s a clear issue here about vested interests, and the need to, as Demasi says, ‘follow the money’.

This brings up the issue of how trials of these drugs are conducted, who pays for them, and who reviews them. According to Demasi, the vast majority of statin trials are funded by manufacturers. Clearly this is a vested interest, so trial results would need to be independently verified. But, again according to Demasi (and others such as Ioannidis and Peter Gøtzsche, founder of the Nordic Cochrane Centre), this is not happening, and ‘the raw data on statin side-effects has never been released to the public’ (Demasi, 2018). This data is held by the Cholesterol Treatment Trialists’ (CTT) Collaboration, under the Clinical Trial Service Unit (CTSU) at Oxford Uni. According to Demasi, who takes a dim view of the CTT Collaboration, they regularly release meta-analyses of data on statins which advocate for a widening of their use, and they’ve signed agreements with drug companies to prevent independent examination of research findings. All of this is described as egregious, which might seem fair enough, but Elizabeth Finkel, in a long-form article for Cosmos magazine in December 2014, takes a different view:

.. [the CTT] are a collaboration of academics and they do have access to the raw data. It is true that they do not share that data outside their collaboration and are criticised by other researchers who would like to be able to check their calculations. But the trialists fear mischief, especially from drug companies seeking to discredit the data of their rivals or from other people with vested interests. Explains [Professor Anthony] Keech, “the problem with ad hoc analyses are that they can use methods to produce a particular result. The most reliable analyses are the ones done using the methods we published in 1995. The rules were set out before we started.” And he points out these analyses are cross-checked by the academic collaborators: “Everything is replicated.”

As a regular reader of Cosmos I’m familiar with Finkel’s writings and find her eminently reliable, which of course leaves me more nonplussed than ever. I’m particularly disturbed that anyone would seriously claim that everyone over fifty (and will it be over forty in the future?) should be on these medications. I’m 63 and I take no medications at all, which I find a great relief, especially when I look at others my age who have mini-pharmacies in their homes. But then I’m one of those males who doesn’t visit doctors much and I have little idea about my cholesterol levels (well yes, they’ve been checked and doctors haven’t raised them to me as an issue). When you get examined, they usually find something wrong….

In her talk, Demasi made a comparison with the research on Tamiflu a few years ago, when Cochrane Collaboration researchers lobbied hard to be allowed to review trial data, and it was finally revealed, apparently, that the drug was not nearly as effective, or as free of side-effects, as its maker, Roche, had claimed. The jury is still out on Tamiflu. Whether it’s fair to compare the Tamiflu issue with the statin issue is a matter I can’t really adjudicate on, but if Finkel is to be believed, the CTT data is more solid.

There’s also an issue about more side effects being complained of by general users of statins – complaints made to their doctors – than side effects found in trials. This has already been referred to above, and is also described in Finkel’s article. Many of these complaints of side-effects haven’t been able to be sheeted home to statins, which suggests there’s possibly/probably a nocebo effect at play here. But Demasi suggests something more disturbing – that many subjects are eliminated from trials during a run-in period precisely because the drug disagrees with them, and so the trial proper begins only when many people suffering from side-effects are excluded. She also notes, I think effectively, that there is a lot of play with statistics in the advertising of statins (and other drugs of course) – for example a study which found that the risk of having a heart attack on statins was about 2% compared to 3% on placebos was being advertised as proving that your heart-attack risk on statins is reduced by a third. This appears to be dodgy – the absolute percentage difference is very small, and how is risk actually assessed? By the number of actual heart attacks over period x? I don’t know. And how many subjects were in the study? Were there other side-effects? But of course we shouldn’t judge the value of statins by advertising guff.
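
To unpack the arithmetic in that advertising example – the 2% and 3% figures are just the ones quoted above, not taken from any particular trial – here’s a quick sketch:

# absolute vs relative risk, using the illustrative figures quoted above
risk_placebo = 0.03   # 3% had a heart attack on placebo (illustrative)
risk_statin  = 0.02   # 2% had a heart attack on the statin (illustrative)

absolute_risk_reduction = risk_placebo - risk_statin               # 0.01, i.e. 1 percentage point
relative_risk_reduction = absolute_risk_reduction / risk_placebo   # ~0.33 - the 'reduced by a third' claim
number_needed_to_treat  = 1 / absolute_risk_reduction              # ~100 people treated to prevent one event

print(absolute_risk_reduction, relative_risk_reduction, number_needed_to_treat)

Both numbers are ‘true’; the relative figure just sounds far more impressive than the absolute one, which is presumably why it’s the one that ends up in the marketing.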

Another interesting attack on those expressing doubts about the mass prescription of statins has been to call them grossly irresponsible and even murderers. This seems strange to me. Of course doctors should be all about saving lives, but they should first of all be looking at prevention before cure as the best way of saving lives. Exercise (mental and physical) really is a great form of medicine, though of course not a cure-all, and diet comes second after exercise. Why the rush to medicalise? And none of the writers and clinicians supporting statins are willing to mention the financial bonanza accruing to their manufacturers and those who invest in them. Skepticism is the lifeblood of science, and the cheerleaders for statins should be willing to accept that.

Having said that, consider all the life-saving medications and procedures that have preceded statins, from antibiotics to vaccines to all the procedures that have made childbirth vastly safer for women – who cares now about the pharmaceutical and other companies and patentees who’ve made their fortunes from them? They’re surely more deserving of their wealth than the Donnie Trumps of the world.

So, that’s my initial foray into statins, and I’m sure the story has a way to go. In my next post I want to look at how statins work. I’ve read a couple of pieces on the subject, and they’ve made my head hurt, so in order to prevent Alzheimer’s I’m going to try an explanation in my own words – to teach myself. George Bernard Shaw wrote ‘those who can, do; those who can’t, teach’ (it’s in Man and Superman). It’s one of those irritating memes, but I prefer the idea that people teach to learn, and learn to teach. That’s why I love teaching, and learning…

By the way, the quote at the top of this post seems irrelevant, but I keep meaning to begin my posts with quotes (it looks cool), so I’m starting now. To explain the quote – it was from a semi-rant by Ford in his introduction to the controversial dinosaur book Too big to walk (I’ve just started reading it), about writers not getting their work edited, peer reviewed and the like, and being proud or happy about this situation. This, he argues, helps account for all the rubbish on the net. It tickled me. I, of course, have no editor. It’s hard enough getting readers, let alone anyone willing to trawl through my dribblings for faults of fact or expression. Of course, I’m acutely aware of this, being at least as aware of my ignorance as Socrates, so I’ve tried to highlight my dilettantism and my indebtedness to others. I’m only here to learn. So Mr Ford, guilty as charged.

References

Dr Maryanne Demasi – Statin wars: Have we been misled by the evidence?

https://en.wikipedia.org/wiki/Statin

https://cosmosmagazine.com/society/will-statin-day-really-keep-doctor-away

https://en.wikipedia.org/wiki/John_Ioannidis

https://www.smithsonianmag.com/science-nature/what-is-the-nocebo-effect-5451823/

http://www.center4research.org/tamiflu-not-tamiflu/

Written by stewart henderson

September 9, 2019 at 9:44 pm

why do our pupils dilate when we’re thinking hard?


Canto: So we’re reading Daniel Kahneman’s Thinking fast and slow, among other things, at the moment, and every page has stuff worth writing about and exploring further, it’s impossible to keep up.

Jacinta: Yes with this stuff it’s a case of reading slow and slower. Or writing about it faster and faster, unlikely in our case. A lot of it might be common knowledge, but not to us, though in these first fifty pages or so he’s getting into embodied cognition, which we’ve written about, but there’s new data here that I didn’t know about but which makes a lot of sense to me.

Canto: That’s because you’ve been primed to accept this stuff haha. But I want to focus here more narrowly on experiments Kahneman did early in his career with Jackson Beatty, who went on to become the leading figure in the study of ‘cognitive pupillometry’.

Jacinta: Presumably measuring pupils, which is easy enough, while measuring cognition or cognitive processes, no doubt a deal harder.

Canto: Kahneman tells the story of an article he read in Scientific American – a mag I regularly read in the eighties, so I felt all nostalgic reading this.

Jacinta: Why’d you stop reading it?

Canto: I don’t know – I had a hiatus, then I started reading New Scientist and Cosmos. I should get back to Scientific American. All three. Anyway, the article was by Eckhard Hess, whose wife noticed that his pupils dilated when he looked at lovely nature pictures. He started looking into the matter, and found that people are judged to be more attractive when their pupils are wider, and that belladonna, which was once used in cosmetics, also dilates the pupils. More importantly for Kahneman, he noted ‘the pupils are sensitive indicators of mental effort’. Kahneman was looking for a research project at the time, so he recruited Beatty to help him with some experiments.

Jacinta: And the result was that our pupils dilate very reliably, and quite significantly, when we’re faced with tough problem-solving tasks, like multiplying double-digit numbers – and they constrict again on completion, so reliably that the monitoring researcher can surprise the subject by saying ‘so you’ve got the answer now?’

Canto: Yes, the subjects were arranged so the researchers could view their eyes magnified on a screen. And of course this kind of research is easy enough to replicate, and has been. My question, though, is why does the pupil dilate in response to such an internal process as concentration? We think of pupils widening to let more light in at times of dim light, that makes intuitive sense, but – in order to seek a kind of metaphorical enlightenment? That’s fascinating.
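
I don’t have access to Kahneman and Beatty’s actual data, but the basic measure in cognitive pupillometry – the task-evoked pupillary response – is simple enough to sketch: compare average pupil diameter during the task with a pre-task baseline. The numbers below are invented for illustration.

# a minimal sketch of a task-evoked pupillary response calculation,
# with invented pupil-diameter samples (in millimetres)
baseline     = [3.1, 3.0, 3.2, 3.1, 3.0]   # resting, before the problem is posed
during_task  = [3.6, 3.8, 3.9, 3.8, 3.7]   # while multiplying two-digit numbers
after_answer = [3.2, 3.1, 3.1, 3.0, 3.1]   # once the answer has been given

def mean(xs):
    return sum(xs) / len(xs)

dilation = mean(during_task) - mean(baseline)    # ~0.7 mm of task-evoked dilation
recovery = mean(after_answer) - mean(baseline)   # back near baseline on completion
print(f'dilation during task: {dilation:.2f} mm, after the answer: {recovery:.2f} mm')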

Jacinta: Well I think you’re hitting on something there. Think of attention rather than concentration. I suspect that our pupils widen when we attend to something important or interesting. As Eckhard Hess’s wife noticed when he was looking at a beautiful scene. In the case of a mathematical or logical problem we’re attending to something intently as well, and the fact that it’s internal rather than external is not so essential. We’re looking at the problem, seeing the problem as we try to solve it.

Canto: Yes but again that’s a kind of metaphorical seeing, whereas your pupils don’t dilate metaphorically.

Jacinta: Yes but it’s likely that our pupils dilate in the dark only when we’re trying to see in the dark. Making that effort. When we turn off the light at night in our bedroom before going to sleep, it’s likely that our pupils don’t dilate, because we’re not trying to see the familiar objects around us, we just want to fall asleep. So even if we leave our eyes open for a brief period, they’re not actually trying to look at anything. It’s like when you enter a classroom and see a maths problem on the board. Your pupils won’t dilate just on noticing the problem, but only when you try to solve it.

Canto: I presume there’s been research on this – like with everything we ever think of. What I’ve found is that the ‘pupillary light reflex’ is described as part of the autonomic nervous system – an involuntary system, largely, which responds ‘autonomously’, unconsciously, to the amount of light it receives. But as you say, there are probably other overriding influences, coming from the brain rather than from outside. However, a pupil ‘at rest’, in a darkened room, is usually much dilated. So dilation is by no means always to do with attention or focus.

Jacinta: Well there’s a distinction made in neurology between bottom-up and top-down processing, which you’ve just alluded to. Information coming from outside, sensed on the skin, the eye and other sensory organs, is sent ‘up’ to the brain – the Higher Authority – which then sends down responses, in this case to dilate or contract the pupil. That whole stimulus-driven pathway is called bottom-up processing. But researchers have found that the pupil isn’t just regulated in a bottom-up way.

Canto: And that’s where cognitive pupillometry comes in.

Jacinta: And here are some interesting research findings regarding top-down influences on pupil size. When subjects were primed with pictures relating to the sun, even if they weren’t bright, their pupils contracted more than with pictures of the moon, even if those pictures were actually brighter than the sun pictures. And even words connected to brightness made their pupils contract. There’s also been solid research to back up the speculations of Eckhard Hess, that emotional scenes, images and memories, whether positive or negative, have a dilating effect on our pupils. For example, hearing the cute sound of a baby laughing, or the disturbing sound of a baby screaming, widens our pupils, while more neutral sounds of road traffic or workplace hubbub have very little effect.

Canto: Because there’s nothing, or maybe too much info, to focus our attention, surely? While the foregrounded baby’s noises stimulate our sense of wonder, of ‘what’s happening?’ We’re moved to attend to it. Actually this reminds me of something apparently unrelated but maybe not. That’s the well-known problem that we’re moved to give to a charity when one suffering child is presented in an advertisement, and less and less as we’re faced with a greater and greater number of starving children. These numbers become like distant traffic, they disperse our attention and interest.

Jacinta: Yes well that’s a whole other story, but this brings us to the most interesting of findings re top-down effects on our pupils, and the question we’ve asked in the title. A more scientific name for thinking hard is increased cognitive load, and countless experiments have shown that increasing cognitive load, for example by solving tough maths problems, or committing stacks of info to memory, correlates with increased pupillary dilation. This hard thinking is done in the prefrontal cortex, but we won’t go into detail here about its more or less contested compartments. What I will say is there’s an obvious difference between thinking and memorising, and both of these activities increase cognitive load, and pupillary dilation. Some very interesting studies relating memorising and pupillary dilation have shown that children under a certain age, unsurprisingly, are less able to hold info in short-term memory than adults. The research task was to memorise a long sequence of numbers. Monitoring of pupil response showed that the children’s pupils would constrict from their dilated state after six numbers, unlike those of adults.

Canto: So, while we may not have a definitive answer to our title question – the why question – it seems that cognitive load, like any load that we carry, requires the expenditure of energy, which shows up in the action of the muscles of the iris that dilate the pupils. This dilation reveals, apparently, that we’re attending to something or concentrating on something. I can see some real-world applications. Imagine, as a teacher, having a physics class, say. You could get your students to wear special glasses that monitor the dilation and constriction of their pupils – I’m sure such devices could be rigged up, and connected to a special console at the teacher’s desk, so you could see who in the class was paying close attention and who was off in dreamland…

Jacinta: Yeah right haha – even if that was physically possible, there are just a few privacy issues there, and how would you know if the pupillary dilation was due to the fascinating complexities of electromagnetism or the delightful profile of your student’s object of fantasy a couple of seats away? Or how could you know if their apparent concentration had anything much to do with comprehension? Or how would you know if their apparent lack of concentration was to do with disinterest or incomprehension or the fact they were way ahead of you in comprehension?

Canto: Details details. Small steps. One way of finding out all that is by asking them. At least such monitoring would give you some clues to go by. I look forward to this brave new transhumanising world….

References

Daniel Kahneman, Thinking fast and slow, 2012

https://kids.frontiersin.org/article/10.3389/frym.2019.00003

Torres A and Hout M (2019) Pupils: A Window Into the Mind. Front. Young Minds. 7:3. doi: 10.3389/frym.2019.00003

Written by stewart henderson

June 24, 2019 at 11:18 am

kin selection – some fascinating stuff


meerkats get together for ye olde family snap

Canto: So we’ve done four blogs on Palestine and we’ve barely scratched the surface, but we’re having trouble going forward with that project because, frankly, it’s so depressing and anger-inducing that it’s affecting our well-being.

Jacinta: Yes, an undoubtedly selfish excuse, but we do plan to go on with that project – we’re definitely not abandoning it, and meanwhile we should recommend such books as Tears for Tarshiha by the Palestinian peace activist Olfat Mahmoud, and Goliath by the Jewish American journalist Max Blumenthal, which highlight the sufferings of Palestinian people in diaspora, and the major stresses of trying to exist under zionist monoculturalism. But for now, something completely different, we’re going to delve into the fascinating facts around kin selection, with thanks to Robert Sapolsky’s landmark book Behave.

Canto: The term ‘kin selection’ was first used by John Maynard Smith in the early sixties but it was first mooted by Darwin (who got it right about honey bees), and its mathematics were worked out back in the 1930s. 

Jacinta: What’s immediately interesting to me is that we humans tend to think we alone know who our kin are, especially our extended or most distant kin, because only we know about aunties, uncles and second and third cousins. We have language and writing and record-keeping, so we can keep track of those things as no other creatures can. But it’s our genes that are the key to kin selection, not our brains.

Canto: Yes, and let’s start with distinguishing between kin selection and group selection, which Sapolsky deals with well. Group selection, popularised in the sixties by the evolutionary biologist V C Wynne-Edwards and by the US TV program Wild Kingdom, which I remember well, was the view that individuals behaved, sometimes or often, for the good of the species rather than for themselves as individuals of that species. However, every case that seemed to illustrate group selection behaviour could easily be interpreted otherwise. Take the case of ‘eusocial’ insects such as ants and bees, where most individuals don’t reproduce. This was seen as a prime case of group selection, where individuals sacrifice themselves for the sake of the highly reproductive queen. However, as evolutionary biologists George Williams and W D Hamilton later showed, eusocial insects have a unique genetic system in which they are all more or less equally ‘kin’, so it’s really another form of kin selection. This eusociality exists in some mammals too, such as mole rats. 

Jacinta: The famous primatologist Sarah Hrdy dealt something of a death-blow to group selection in the seventies by observing that male langur monkeys in India commit infanticide with some regularity, and, more importantly, she worked out why. Langurs live in groups with one resident male to a bunch of females, with whom he makes babies. Meanwhile the other males tend to hang around in groups brooding instead of breeding, and infighting. Eventually, one of this male gang feels powerful enough to challenge the resident male. If he wins, he takes over the female group, and their babies. He knows they’re not his, and his time is short before he gets booted out by the next tough guy. Further, the females aren’t ovulating because they’re nursing their kids. The whole aim is to pass on his genes (this is individual rather than kin selection), so his best course of action is to kill the babies, get the females ovulating as quickly as possible, and impregnate them himself.

Canto: Yes, but it gets more complicated, because the females have just as much interest in passing on their genes as the male, and a bird in the hand is worth two in the bush…

Jacinta: Let me see, a babe in your arms is worth a thousand erections?

Canto: More or less precisely. So they fight the male to protect their infants, and can even go into ‘fake’ estrus, and mate with the male, fooling the dumb cluck into thinking he’s a daddy. 

Jacinta: And since Hrdy’s work, infanticide of this kind has been documented in well over 100 species, even though it can sometimes threaten the species’ survival, as in the case of mountain gorillas. So much for group selection.

Canto: So now to kin selection. Here are some facts. If you have an identical twin your genome is identical with hers. If you have a full sibling you’re sharing 50% of your genes on average, and with a half-sibling 25%. As you can see, the mathematics of genes and relatedness can be widened out to great degrees of complexity. And since this is all about passing on all, or most, or some of your genes, it means that ‘in countless species, whom you co-operate with, compete with, or mate with depends on their degree of relatedness to you’, to quote Sapolsky.

Jacinta: Yes, so here’s a term to introduce and then fairly promptly forget about: allomothering. This is when a mother of a newborn enlists the assistance of another female in the process of child-rearing. It’s a commonplace among primate species, but also occurs in many bird species. The mother herself benefits from an occasional rest, and the allomother, more often than not a younger relation such as the mother’s kid sister, gets to practice mothering. 

Canto: So this is part of what is called ‘inclusive fitness’, where, in this case, the kid gets all-day mothering (if of varying quality) the kid sister gets to learn about mothering, thereby increasing her fitness when the time comes, and the mother gets a rest to recharge her batteries for future mothering. It’s hopefully win-win-win. 

Jacinta: Yes, there are negatives and positives to altruistic behaviour, but according to Hamilton’s Rule, rB > C, kin selection favours altruism when the benefit to relatives, weighted by their degree of relatedness, is greater than the cost to the altruistic individual.

Canto: To explain that rule, r equals degree of relatedness between the altruist and the beneficiary (aka coefficient of relatedness), B is the benefit (measured in offspring) to the recipient, and C is the cost to the altruist. What interests me most, though, about this kin stuff, is how other, dumb primates know who is their kin. Sapolsky describes experiments with wild vervet monkeys by Dorothy Cheney and Robert Seyfarth which show that if monkey A behaves badly to monkey B, this will adversely affect B’s behaviour towards A’s relatives, as well as B’s relatives’ behaviour to A, as well as B’s relatives’ behaviour to A’s relatives. How do they all know who those relatives are? Good question. The same researchers proved this recognition by playing a recording of a juvenile distress call to a group of monkeys hanging around. The female monkeys all looked at the mother of the owner of that distress call to see what she would do. And there were other experiments of the sort. 
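
Here’s a rough sketch of Hamilton’s Rule in code – the relatedness coefficients are the standard ones, but the benefit and cost figures are invented for illustration:

# Hamilton's rule, r*B > C: altruism is favoured when relatedness times the
# benefit to the recipient outweighs the cost to the altruist
RELATEDNESS = {
    'identical twin': 1.0,
    'full sibling': 0.5,
    'half sibling': 0.25,
    'first cousin': 0.125,
}

def altruism_favoured(relation, benefit, cost):
    """benefit and cost measured in expected offspring, as in the text"""
    return RELATEDNESS[relation] * benefit > cost

# e.g. an act costing the altruist 1 expected offspring and gaining the recipient 3
for relation in RELATEDNESS:
    print(relation, altruism_favoured(relation, benefit=3, cost=1))
# favoured for twins (3 > 1) and full siblings (1.5 > 1),
# not for half-siblings (0.75 < 1) or first cousins (0.375 < 1)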

Jacinta: And even when we can’t prove knowledge of kin relations (kin recognition) among the studied animals, we find their actual behaviour tends always to conform to Hamilton’s Rule. Or almost always… In any case there are probably other cues, including odours, which may be unconsciously sensed, which might aid in inclusive fitness and also avoiding inbreeding. 

Canto: Yes, and it’s interesting how this closeness, this familiarity, breeds contempt in some ways. Among humans too. Well, maybe not contempt, but we tend not to be sexually attracted to those we grow up with and, for example, take baths with as kids, whether or not they’re related to us. But I suppose that has nothing to do with kin selection. And yet…

Jacinta: And yet it’s more often than not siblings or kin that we have baths with. As kids. But getting back to odours, we have more detail about that, as described in Sapolsky. Place a mouse in an enclosed space, then introduce two other mice, one unrelated to her, another a full sister from another litter, never encountered before. The mouse will hang out with the sister. This is called innate recognition, and it’s due to olfactory signatures. Pheromones. From proteins which come from genes in the major histocompatibility complex (MHC).

Canto: Histowhat?

Jacinta: Okay, you know histology is the study of bodily tissues, so think of the compatibility or otherwise of tissues that come into contact. Immunology.  Recognising friend or foe, at the cellular, subcellular level. The MHC, this cluster of genes, kicks off the production of proteins which produce pheromones with a unique odour, and because your relatives have similar MHC genes, they’re treated as friends because they have a similar olfactory signature. Which doesn’t mean the other mouse in the enclosure is treated as a foe. It’s a mouse, after all. But other animals have their own olfactory signatures, and that’s another story. 
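
As a loose way of picturing the idea – not Sapolsky’s example, just a toy model with made-up allele labels – you could treat each mouse’s olfactory ‘signature’ as a set of MHC alleles and compare the overlap:

# a toy picture of MHC-based kin recognition: compare how many (made-up)
# MHC allele labels two olfactory 'signatures' share
def mhc_similarity(a, b):
    """Fraction of shared alleles (Jaccard index) between two signatures."""
    return len(a & b) / len(a | b)

focal_mouse = {'mhc1_a', 'mhc1_b', 'mhc2_a', 'mhc2_c'}
full_sister = {'mhc1_a', 'mhc1_b', 'mhc2_a', 'mhc2_d'}   # shares most alleles
stranger    = {'mhc1_e', 'mhc1_f', 'mhc2_g', 'mhc2_c'}   # shares few

print('sister:  ', mhc_similarity(focal_mouse, full_sister))   # 0.6
print('stranger:', mhc_similarity(focal_mouse, stranger))      # ~0.14
# the mouse hangs out with whichever smells more like itself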

Canto: And there are other forms of kin recognition. Get this – birds recognise their parents from the songs sung to them before they were hatched. Birds have distinctive songs, passed down from father to son, since it’s mostly the males that do the singing. And as you get to more complex species, such as primates – though maybe they’re not all as complex as some bird species – there might even be a bit of reasoning involved, or at least consciousness of what’s going on.

Jacinta: So that’s kin selection, but can’t we superior humans rise above that sort of thing? Australians marry Japanese, or have close friendships with Nigerians, at least sometimes. 

Canto: Sometimes, and this is the point. Kin selection is an important factor in shaping behaviour and relations, but it’s one of multiple factors, all with differing influences in different individuals. It’s just that such influences may go below the level of awareness, and being aware of the factors shaping our behaviour is always the key, if we want to understand ourselves and everyone else, human or non-human.

Jacinta: Good to stop there. As we’ve said, much of our understanding has come from reading Sapolsky’s Behave, because we’re old-fashioned types who still read books, but I’ve just discovered that there’s a whole series of lectures by Sapolsky, about 25, on human behaviour, which employs the same structure as the book (which is clearly based on the lectures), and is available on youtube here. So all that’s highly recommended, and we’ll be watching them.

References

R Sapolsky, Behave: the biology of humans at our best and worst, Bodley Head, 2017

https://www.britannica.com/science/animal-behavior/Function#ref1043131

https://en.wikipedia.org/wiki/Kin_selection

https://en.wikipedia.org/wiki/Eusociality


more about ozone, and the earth’s greatest extinction event


the Siberian Traps are layers of flood basalt covering an area of 2 million square kilometres

Ozone, or trioxygen (O3), an unstable molecule which is regularly produced and destroyed by the action of sunlight on O2, is a vital feature in our atmosphere. It protects life on earth from the harmful effects of too much UV radiation, which can contribute to skin cancers in humans, and genetic abnormalities in plant life. In a previous post I wrote about the discovery of the ozone shield, and the hole above Antarctica, which we seem to be reducing – a credit to human global co-operation. In this post I’m going to try and get my head around whether or not ozone depletion played a role in the so-called end-Permian extinction of some 250 mya. 
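
For the record, the basic chemistry – the Chapman cycle, plus the chlorine catalysis that becomes relevant below – can be written out, roughly, in a few lines:

O2 + UV light -> O + O        (oxygen split by ultraviolet light)
O + O2        -> O3           (ozone formed, with a third molecule carrying off the excess energy)
O3 + UV light -> O2 + O       (ozone split, absorbing UV in the process)
O + O3        -> 2 O2         (slow natural loss)

Cl + O3       -> ClO + O2     (chlorine catalysis: the same chlorine atom is regenerated
ClO + O       -> Cl + O2       and goes on to destroy ozone again and again)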

I first read of this theory in David Beerling’s 2009 book The emerald planet, but recent research appears to have backed up Beerling’s scientific speculations – though speculation is too weak a word. Beerling is a world-renowned geobiologist and expert on historical global climate change. He’s also a historian of science, and in ‘An ancient ozone catastrophe?’, chapter 4 of The emerald planet, he describes the discovery and understanding of ozone through the research of Robert Strutt, Christian Schönbein, Marie Alfred Cornu, Walter Hartley, George Dobson, Sidney Chapman and Paul Crutzen, among others. He goes on to describe the ozone hole discovery in the 70s and 80s, before focusing on research into the possible effects of previous events – the Tunguska asteroid strike of 1908, the Mount Pinatubo eruption of 1991 and others – on atmospheric ozone levels, and then homes in on the greatest extinction event in the history of our planet – the end-Permian mass extinction, ‘the Great Dying’, which wiped out some 95% of all species then existing.

According to Beerling, it was an international team of palaeontologists led by Henk Visscher at the University of Utrecht who first made the claim that stratospheric ozone had substantially reduced in the end-Permian. They hypothesised that, due to the greatest volcanic eruptions in Earth history, which created the Siberian Traps (layers of solidified basalt covering a huge area of northern Russia), huge deposits of coal and salt, the largest on Earth, were disrupted:


The widespread heating of these sediments and the action of hot groundwater dissolving the ancient salts, was a subterranean pressure cooker synthesising a class of halogenated compounds called organohalogens, reactive chemicals that can participate in ozone destruction. And in less than half a million years, this chemical reactor is envisaged to have synthesised and churned out sufficiently large amounts of organohalogens to damage the ozone layer worldwide to create an intense increased flux of UV radiation.

However, Beerling questions this hypothesis and considers that it may have been the eruptions themselves, which lasted 2 million years and occurred at the Permian-Triassic boundary 250-252 mya, rather than their impact on salt deposits, that did the damage. There’s evidence that many of the eruptions originated from as deep as 10 kilometres below the surface, injected explosively enough to reach the stratosphere, and that these plumes contained substantial amounts of chlorine. 

More recent research, published this year, has further substantiated Visscher’s team’s findings regarding genetic mutations in ancient conifers and lycopsids, and their probable connection with the increased UV radiation let through by ozone destruction. The mutations were global and dated to the same period. Laboratory experiments exposing related modern plants to bursts of UV radiation have produced more or less identical spore mutations.

The exact chain of events linking the eruptions to the ozone destruction has yet to be worked out, and naturally there’s a lot of scientific argy-bargy going on, but the whole story, even though it occurred so far in the past, is a reminder of the fragility of that part of our planet that most concerns us – the biosphere. The eruptions clearly altered atmospheric chemistry and temperature. Isotopic measurements of oxygen in sea water suggest that equatorial waters reached more than 40°C. As can be imagined, this had killer effects on multiple species.

So, we’re continuing to gain knowledge about the ozone shield, its importance and its fragility. I don’t know that there are too many ozone-hole skeptics around (I don’t want to look too hard), but if we could only get the same kind of apparent near-unanimity with regard to anthropogenic global warming, that would be great progress.

Written by stewart henderson

October 10, 2018 at 3:15 pm

on the explosion of battery research – part one, some basic electrical concepts, and something about solid state batteries…

leave a comment »

just another type of battery technology not mentioned in this post

Okay I was going to write about gas prices in my next post but I’ve been side-tracked by the subject of batteries. Truth to tell, I’ve become mildly addicted to battery videos. So much seems to be happening in this field that it’s definitely affecting my neurotransmission.

Last post, I gave a brief overview of how lithium ion batteries work in general, and I made mention of the variety of materials used. What I’ve been learning over the past few days is that there’s an explosion of research into these materials as teams around the world compete to develop the next generation of batteries, sometimes called super-batteries just for added exhilaration. The key factors in the hunt for improvements are energy density (more energy for less volume), safety and cost.

To take an example, in this video describing one company’s production of lithium-ion batteries for electric and hybrid vehicles, four elements are mentioned – lithium, for the anode, a metallic oxide for the cathode, a dry solid polymer electrolyte and a metallic current collector. This is confusing. In other videos the current collectors are made from two different metals but there’s no mention of this here. Also in other videos, such as this one, the anode is made from layered graphite and the cathode is made from a lithium-based metallic oxide. More importantly, I was shocked to hear of the electrolyte material as I thought that solid electrolytes were still at the experimental stage. I’m on a steep and jagged learning curve. Fact is, I’ve had a mental block about electricity since high school science classes, and when I watch geeky home-made videos talking of volts, amps and watts I have no trouble thinking of Alessandro Volta, James Watt and André-Marie Ampère, but I have no idea of what these units actually measure. So I’m going to begin by explaining some basic concepts for my own sake.

Amps

Metals are different from other materials in that electrons, those negatively-charged sub-atomic particles that buzz around the nucleus, are able to move between atoms. The best metals in this regard, such as copper, are described as conductors. However, like-charged electrons repel each other, so if you apply a force which pushes electrons in a particular direction, they will displace other electrons, creating a flow which we call an electrical current (the push propagates through the wire at near light-speed, even though the individual electrons drift along quite slowly). An amp is simply a measure of electron flow in a current, 1 ampere being about 6.24 x 10¹⁸ electrons (ten to the power of eighteen) per second. Two amps is twice that, and so on. This useful video provides info on a spectrum of currents, from the tiny ones in our mobile phone antennae to the very powerful ones in bolts of lightning. We use batteries to create this above-mentioned force. Connecting a battery to, say, a copper wire attached to a light bulb causes the current to flow to the bulb – a transfer of energy. Inserting a switch cuts off and reconnects the circuit. Fuses work in a similar way: they’re rated at a particular amperage, and if the current is too high, the fuse will melt, breaking the circuit. The battery’s negative electrode, or anode, drives the current, repelling electrons and creating a cascade effect through the wire, though I’m still not sure how that happens (perhaps I’ll find out when I look at voltage or something).
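
To make those numbers concrete, here is a minimal Python sketch – the microamp and 30 kA figures are my own rough examples, not from the video:

elementary_charge = 1.602e-19   # approximate charge on one electron, in coulombs

def electrons_per_second(current_amps):
    # amps are coulombs per second, so divide by the charge carried by each electron
    return current_amps / elementary_charge

print(f"{electrons_per_second(1):.3e}")        # ~6.24e+18 electrons per second for 1 amp
print(f"{electrons_per_second(1e-6):.3e}")     # a microamp-scale antenna current
print(f"{electrons_per_second(30_000):.3e}")   # a roughly 30 kA lightning bolt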

Volts

So, yes, volts are what push electrons around in an electric current. A voltage source, such as a battery or an adjustable power supply, as in this video, produces a measurable force which, applied to a conductor, creates a current measurable in amps. The video also points out that voltage can be used as a signal, representing data – a whole other realm of technology. So to understand how voltage does what it does, we need to know what it is. It’s the product of a chemical reaction inside the battery, and it’s defined technically as a difference in electrical potential energy, per unit of charge, between two points. Potential energy is defined as ‘the potential to do work’, and that’s what a battery has. Energy – the ability to do work – is a scientific concept, which we measure in joules. A battery has electrical potential energy, as a result of the chemical reactions going on inside it (or the potential chemical reactions? I’m not sure). A unit of charge is called a coulomb. One amp of current is equal to one coulomb of charge flowing per second. This is where it starts to get like electrickery for me, so I’ll quote directly from the video:

When we talk about electrical potential energy per unit of charge, we mean that a certain number of joules of energy are being transferred for every unit of charge that flows.

So apparently, with a 1.5 volt battery (and I note that’s your standard AA and AAA batteries), for every coulomb of charge that flows, 1.5 joules of energy are transferred. That is, 1.5 joules of chemical energy are being converted to electrical potential energy (I’m writing this but I don’t really get it). This is called ‘voltage’. So for every coulomb’s worth of electrons flowing, 1.5 joules of energy are produced and carried to the light bulb (or whatever), in that case producing light and heat. So the key is, one volt equals one joule per coulomb, four volts equals 4 joules per coulomb… Now, it’s a multiplication thing. In the adjustable power supply shown in the video, one volt (or joule per coulomb) produced 1.8 amps of current (1.8 coulombs per second). For every coulomb, a joule of energy is transferred, so in this case 1 x 1.8 joules of energy are being transferred every second. If the voltage is pushed up to two (2 joules per coulomb), it produces around 2 amps of current, so that’s 2 x 2 joules per second. Get it? So a 1.5 volt battery indicates that there’s a difference in electrical potential energy of 1.5 volts between the negative and positive terminals of the battery.
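
Here is that arithmetic in a few lines of Python, echoing the adjustable-supply example above:

# voltage is energy per unit charge: volts = joules per coulomb
# current is charge per unit time:   amps  = coulombs per second
volts = 1.0   # the adjustable-supply example above
amps = 1.8    # 1.8 coulombs flowing past each second

print(volts * amps)   # 1.8 joules transferred every second
print(2.0 * 2.0)      # pushed up to 2 V at roughly 2 A: 4.0 joules every second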

Watts

A watt is a unit of power, and it’s measured in joules per second. One watt equals one joule per second. So in the previous example, if 2 volts of pressure creates 2 amps of current, the result is that four watts of power are produced (voltage x current = power). So to produce a certain quantity of power, you can vary the voltage and the current, as long as the multiplied result is the same. For example, highly efficient LED lighting produces far more light per watt than incandescent bulbs, which waste most of their energy as heat.
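
A minimal sketch of that trade-off – the particular combinations below are arbitrary:

def power_watts(volts, amps):
    # P = V x I, in joules per second
    return volts * amps

# the same 4 watts can be delivered by quite different voltage/current combinations
print(power_watts(2, 2.0))      # 4.0 W, as in the example above
print(power_watts(12, 1 / 3))   # 4.0 W - more voltage, less current
print(power_watts(1, 4.0))      # 4.0 W - less voltage, more current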

Ohms and Ohm’s law

The flow of electrons through a wire – the current – may sometimes be too much to power a device safely, so we need a way to control the flow. We use resistors for this. In fact everything, including highly conductive copper, has resistance. The atoms in the copper vibrate slightly, hindering the flow and producing heat. Metals just happen to have less resistance than other materials. Resistance is measured in ohms (Ω). Less than one Ω would be a very low resistance. A mega-ohm (1 million Ω) would mean a very poor conductor. Using resistors with particular resistance values allows you to control the current flow. The mathematical relations between resistance, voltage and current are expressed in Ohm’s law, V = I x R, or R = V/I, or I = V/R (I being the current in amps). Thus, if you have a voltage (V) of 10, and you want to limit the current (I) to 10 milliamps (10 mA, or 0.01 A), you would require a value for R of 1,000 Ω. You can, of course, buy resistors of various values if you want to experiment with electrical circuitry, or for other reasons.
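
And the 10 V / 10 mA example, checked in a couple of lines of Python:

volts = 10
amps = 0.010               # 10 milliamps

resistance = volts / amps  # Ohm's law rearranged: R = V / I
print(resistance)          # 1000.0 ohms
print(volts / resistance)  # check with I = V / R: back to 0.01 amps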

That’s enough about electricity in general for now, though I intend to continue to educate myself little by little on this vital subject. Let’s return now to the lithium-ion battery, which has so revolutionised modern technology. Its co-inventor, John Goodenough, in his nineties, has led a team which has apparently produced a new battery that is a great improvement on ole dendrite-ridden lithium-ion shite. These dendrites appear when the Li-ion batteries are charged too quickly. They’re strandy things that make their way through the liquid electrolyte and can cause a short-circuit. Goodenough has been working with Helena Braga, who has developed a solid glass electrolyte which has eliminated the dendrite problem. Further, they’ve replaced or at least modified the lithium metal oxide and the porous carbon electrodes with readily available sodium, and apparently they’re using much the same material for the cathode as the anode, which doesn’t make sense to many experts. Yet apparently it works, due to the use of glass, and only needs to be scaled up by industry, according to Braga. It promises to be cheaper, safer, faster-charging, more temperature-resistant and more energy dense than anything that has gone before. We’ll have to wait a while, though, to see what peer reviewers think, and how industry responds.

Now, I’ve just heard something about super-capacitors, which I suppose I’ll have to follow up on. And I’m betting there’re more surprises lurking in labs around the world…

Written by stewart henderson

July 29, 2017 at 4:00 pm

the strange world of the self-described ‘open-minded’ part two

leave a comment »

That such a huge number of people could seriously believe that the Moon landings were faked by a NASA conspiracy raises interesting questions – maybe more about how people think than anything about the Moon landings themselves. But still, the most obvious question is the matter of evidence.

Philip Plait, from ‘Appalled at Apollo’, Chapter 17 of Bad Astronomy

the shadows of astronauts Dave Scott and Jim Irwin on the Moon during the 1971 Apollo 15 mission – with thanks to NASA, which recently made thousands of Apollo photos available to the public through Flickr

So as I wrote in part one of this article, I remember well the day of the first Moon landing. I had just turned 13, and our school, presumably along with most others, was given a half-day off to watch it. At the time I was even more amazed that I was watching the event as it happened on TV, so I’m going to start this post by exploring how this was achieved, though I’m not sure that this was part of the conspiracy theorists’ ‘issues’ about the missions. There’s a good explanation of the 1969 telecast here, but I’ll try to put it in my own words, to get my own head around it.

I also remember being confused at the time, as I watched Armstrong making his painfully slow descent down the small ladder from the lunar module, that he was being recorded doing so, sort of side-on (don’t trust my memory!), as if someone was already there on the Moon’s surface waiting for him. I knew of course that Aldrin was accompanying him, but if Aldrin had descended first, why all this drama about ‘one small step…’? – it seemed a bit anti-climactic. What I didn’t know was that the whole thing had been painstakingly planned, and that the camera recording Armstrong was lowered mechanically, operated by Armstrong himself. Wade Schmaltz gives the low-down on Quora:

The TV camera recording Neil’s first small step was mounted in the LEM [Lunar Excursion Module, aka Lunar Module]. Neil released it from its cocoon by pulling a cable to open a trap door prior to exiting the LEM that first time down the ladder.

Neil Armstrong, touching down on the Moon – an image I’ll never forget

the camera used to capture Neil Armstrong’s descent

As for the telecast, Australia played a large role. Here my information comes from Space Exploration Stack Exchange, a Q and A site for specialists as well as amateur space flight enthusiasts.

Australia was one of three continents involved in the transmissions, but it was the most essential. Australia had two tracking stations, one near Canberra and the other at the Parkes Radio Observatory west of Sydney. The others were in the Mojave Desert, California, and in Madrid, Spain. The tracking stations in Australia had a direct line on Apollo’s signal. My source quotes directly from NASA:

The 200-foot-diameter radio dish at the Parkes facility managed to withstand freak 70 mph gusts of wind and successfully captured the footage, which was converted and relayed to Houston.

Needless to say, the depictions of Canberra and Sydney aren’t geographically accurate here!

And it really was pretty much ‘as it happened’, the delay being less than a minute. The Moon is only about 1.3 light-seconds away, but there were other small delays in relaying the signal to TV networks for us all to see.
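
To put a rough number on the radio delay itself (using the average Earth–Moon distance; the rest of the lag came from ground relays and signal conversion):

moon_distance_km = 384_400       # average Earth-Moon distance
speed_of_light_km_s = 299_792    # speed of light in km per second

print(moon_distance_km / speed_of_light_km_s)   # roughly 1.28 seconds one way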

So now to the missions and the hoax conspiracy. But really, I won’t be dealing with the hoax stuff directly, because frankly it’s boring. I want to write about the good stuff. Most of the following comes from the ever-more reliable Wikipedia – available to all!

The ‘space race’ between the Soviet Union and the USA can be dated quite precisely. It began in July 1955, when the USA announced plans to launch a satellite – a craft that would orbit the Earth. Within days, the Soviet Union announced identical plans, and was able to carry them out a little over two years later. The world was stunned when Sputnik 1 was launched on October 4 1957. Only a month later, Laika the Muscovite street-dog was sent into orbit in Sputnik 2 – a certain-death mission. The USA got its first satellite, Explorer 1, into orbit at the end of January 1958, and later that year the National Aeronautics and Space Administration (NASA) was established under Eisenhower to encourage peaceful civilian developments in space science and technology. However the Soviet Union retained the initiative, launching its Luna program in late 1958, with the specific purpose of studying the Moon. The whole program, which lasted until 1976, cost some $4.5 billion and its many failures were, unsurprisingly, shrouded in secrecy. The first three Luna rockets, intended to land, or crash, on the Moon’s surface, failed on launch, and the fourth, later known as Luna 1, was given the wrong trajectory and sailed past the Moon, becoming the first human-made satellite to take up an independent heliocentric orbit. That was in early January 1959 – so the space race, with its focus on the Moon, began much earlier than many people realise, and though so much of it was about macho one-upmanship, important technological developments resulted, and vital observations were made, including measurements of energetic particles in the outer Van Allen belt. Luna 1 was the first spaceship to achieve escape velocity, the principal barrier to landing a vessel on the Moon.

After another launch failure in June 1959, the Soviets successfully launched the rocket later known as Luna 2 in September that year. Its crash landing on the Moon was a great success, which the ‘communist’ leader Khrushchev was quick to ‘capitalise’ on during his only visit to the USA immediately after the mission. He handed Eisenhower replicas of the pennants left on the Moon by Luna 2. And there’s no doubt this was an important event, the first planned impact of a human-built craft on an extra-terrestrial object, almost 10 years before the Apollo 11 landing.

The Luna 2 success was immediately followed only a month later by the tiny probe Luna 3‘s flyby of the far side of the Moon, which provided the first-ever pictures of its more mountainous terrain. However, these two missions formed the apex of the Luna enterprise, which experienced a number of years of failure until the mid-sixties. International espionage perhaps? I note that James Bond began his activities around this time.

the Luna 3 space probe (or is it H G Wells’ time machine?)

The Luna Program wasn’t the only one being financed by the Soviets at the time, and the Americans were also developing programs. Six months after Laika’s flight, the Soviets successfully launched Sputnik 3, the fourth successful satellite after Sputnik 1 & 2 and Explorer 1. The important point to be made here is that the space race, with all its ingenious technical developments, began years before the famous Vostok 1 flight that carried a human being, Yuri Gagarin, into space for the first time, so the idea that the technology wasn’t sufficiently advanced for a moon landing many years later becomes increasingly doubtful.

Of course the successful Vostok flight in April 1961 was another public relations coup for the Soviets, and it doubtless prompted Kennedy’s speech to the US Congress a month later, in which he proposed that “this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the Moon and returning him safely to the Earth.”

So from here on in I’ll focus solely on the USA’s moon exploration program. It really began with the Ranger missions, which were conceived (well before Kennedy’s speech and Gagarin’s flight) in three phases or ‘blocks’, each with different objectives and with increasingly sophisticated system design. However, as with the Luna missions, these met with many failures and setbacks. Ranger 1 and Ranger 2 failed on launch in the second half of 1961, and Ranger 3, the first ‘block 2 rocket’, launched in late January 1962, missed the Moon due to various malfunctions, and became the second human craft to take up a heliocentric orbit. The plan had been to ‘rough-land’ on the Moon, emulating Luna 2 but with a more sophisticated system of retrorockets to cushion the landing somewhat. The Wikipedia article on this and other missions provides far more detail than I can provide here, but the intensive development of new flight design features, as well as the use of solar cell technology, advanced telemetry and communications systems and the like really makes clear to me that both competitors in the space race were well on their way to having the right stuff for a manned moon landing.

I haven’t even started on the Apollo missions, and I try to give myself a 1500-word or so limit on posts, so I’ll have to write a part 3! Comment excitant!

The Ranger 4 spacecraft was more or less identical in design to Ranger 3, with the same impact-limiter – made of balsa wood! – atop the lunar capsule. Ranger 4 went through preliminary testing with flying colours, the first of the Rangers to do so. However the mission itself was a disaster, as the on-board computer failed, and no useful data was returned and none of the preprogrammed actions, such as solar power deployment and high-gain antenna utilisation, took place. Ranger 4 finally impacted the far side of the Moon on 26 April 1962, becoming the first US craft to land on another celestial body. Ranger 5 was launched in October 1962 at a time when NASA was under pressure due to the many failures and technical problems, not only with the Ranger missions, but with the Mariner missions, Mariner 1 (designed for a flyby mission to Venus) having been a conspicuous disaster. Unfortunately Ranger 5 didn’t improve matters, with a series of on-board and on-ground malfunctions. The craft missed the Moon by a mere 700 kilometres. Ranger 6, launched well over a year later, was another conspicuous failure, as its sole mission was to send high-quality photos of the Moon’s surface before impact. Impact occurred, and overall the flight was the smoothest one yet, but the camera system failed completely.

There were three more Ranger missions. Ranger 7, launched in July 1964, was the first completely successful mission of the series. Its mission was the same as that of Ranger 6, but this time over 4,300 photos were transmitted during the final 17 minutes of flight. These photos were subjected to much scrutiny and discussion, in terms of the feasibility of a soft landing, and the general consensus was that some areas looked suitable, though the actual hardness of the surface couldn’t be determined for sure. Miraculously enough, Ranger 8, launched in February 1965, was also completely successful. Again its sole mission was to photograph the Moon’s surface, as NASA was beginning to ready itself for the Apollo missions. Over 7,000 good quality photos were transmitted in the final 23 minutes of flight. The overall performance of the spacecraft was hailed as ‘excellent’, and its impact crater was photographed two years later by Lunar Orbiter 4. And finally Ranger 9 made it three successes in a row, and this time the camera’s 6,000 images were broadcast live to viewers across the United States. The date was March 24, 1965. The next step would be that giant one.

A Ranger 9 image showing rilles – long narrow depressions – on the Moon’s surface

the strange world of the self-described ‘open-minded’ – part one

leave a comment »

my copy – a stimulating and fun read, great fodder for closed-minded types, comme moi

I’ve just had my first ever conversation with someone who at least appears to be sceptical of the Apollo 11 moon landing of 1969 – and, I can only suppose, the five subsequent successful moon landings. Altogether, twelve men walked on the moon between 20 July 1969 and 14 December 1972, when the crew members of Apollo 17 left the moon’s surface. Or so the story goes.

This conversation began when I said that perhaps the most exciting world event I’ve experienced was that first moon landing, watching Neil Armstrong possibly muffing the lines about one small step for a man, and marvelling that it could be televised. I was asked how I knew that it really happened. How could I be so sure?

Of course I had no immediate answer. Like any normal person, I have no immediate, or easy, answer to a billion questions that might be put to me. We take most things on trust, otherwise it would be a very very painstaking existence. I didn’t mention that, only a few months before, I’d read Phil Plait’s excellent book Bad Astronomy, subtitled Misconceptions and misuses revealed, from astrology to the moon landing ‘hoax’. Plait is a professional astronomer who maintains the Bad Astronomy blog and he’s much better equipped to handle issues astronomical than I am, but I suppose I could’ve made a fair fist of countering this person’s doubts if I hadn’t been so flabbergasted. As I said, I’d never actually met someone who doubted these events before. In any case I don’t think the person was in any mood to listen to me.

Only one reason for these doubts was offered. How could the lunar module have taken off from the moon’s surface? Of course I couldn’t answer, never having been an aeronautical engineer employed by NASA, or even a lay person nerdy enough to be up on such matters, but I did say that the moon’s minimal gravity would presumably make a take-off less problematic than, say, a rocket launch from Mother Earth, and this was readily agreed to. I should also add that the difficulties, whatever they might be, of relaunching the relatively lightweight lunar modules – don’t forget there were six of them – didn’t feature in Plait’s list of problems identified by moon landing skeptics which lead them to believe that the whole Apollo adventure was a grand hoax.

So, no further evidence was proffered in support of the hoax thesis. And let’s be quite clear, the claim, or suggestion, that the six moon landings didn’t occur, must of necessity be a suggestion that there was a grand hoax, a conspiracy to defraud the general public, one involving tens of thousands of individuals, all of whom have apparently maintained this fraud over the past 50 years. A fraud perpetrated by whom, exactly?

My conversation with my adversary was cut short by a third person, thankfully, but after the third person’s departure I was asked this question, or something like it: Are you prepared to be open-minded enough to entertain the possibility that the moon landing didn’t happen, or are you completely closed-minded on the issue?

Another way of putting this would be: Why aren’t you as open-minded as I am?

So it’s this question that I need to reflect on.

I’ve been reading science magazines on an almost daily basis for the past thirty-five years. Why?

But it didn’t start with science. When I was a kid, I loved to read my parents’ encyclopaedias. I would mostly read history, learning all about the English kings and queens and the battles and intrigues, etc, but basically I would stop at any article that took my fancy – Louis Pasteur, Marie Curie, Isaac Newton as well as Hitler, Ivan the Terrible and Cardinal Richelieu. Again, why? I suppose it was curiosity. I wanted to know about stuff. And I don’t think it was a desire to show off my knowledge, or not entirely. I didn’t have anyone to show off to – though I’m sure I wished that I had. In any case, this hunger to find things out, to learn about my world – it can hardly be associated with closed-mindedness.

The point is, it’s not science that’s interesting, it’s the world. And the big questions. The question – How did I come to be who and where I am?  – quickly becomes – How did life itself come to be? – and that extends out to – How did matter come to be? The big bang doesn’t seem to explain it adequately, but that doesn’t lead me to imagine that scientists are trying to trick us. I understand, from a lifetime of reading, that the big bang theory is mathematically sound and rigorous, and I also know that I’m far from alone in doubting that the big bang explains life, the universe and everything. Astrophysicists, like other scientists, are a curious and sceptical lot and no ‘ultimate explanation’ is likely to satisfy them. The excitement of science is that it always raises more questions than answers, it’s the gift that keeps on giving, and we have human ingenuity to thank for that, as we’re the creators of science, the most amazing tool we’ve ever developed.

But let me return to open-mindedness and closed-mindedness. During the conversation described above, it was suggested that the USA simply didn’t have the technology to land people on the moon in the sixties. So, ok, I forgot this one: two reasons put forward – 1, the USA didn’t have the technological nous; 2, the modules couldn’t take off from the moon (later acknowledged to be not so much of an issue). I pretty well knew this first reason to be false. Of course I’ve read, over the years, about the Apollo missions, the rivalry with the USSR, the hero-worship of Yuri Gagarin and so forth. I’ve also absorbed, in my reading, much about spaceflight and scientific and technological development over the years. Of course, I’ve forgotten most of it, and that’s normal, because that’s how our brains work – something I’ve also read a lot about! Even the most brilliant scientists are unlikely to be knowledgeable outside their own often narrow fields, because neurons that fire together wire together, and it’s really hands-on work that gets those neurons firing.

But here’s an interesting point. I have in front of me the latest issue of Cosmos magazine, issue 75. I haven’t read it yet, but I will do. On my shelves are the previous 74 issues, each of which I’ve read, from cover to cover. I’ve also read more than a hundred issues of the excellent British mag, New Scientist. The first science mag I ever read was the monthly Scientific American, which I consumed with great eagerness for several years in the eighties, and I still buy their special issues sometimes. Again, the details of most of this reading are long forgotten, though of course I learned a great deal about scientific methods and the scientific mind-set. The interesting point, though, is this. In none of these magazines, and in none of the books, blogs and podcasts I’ve consumed in about forty years of interest in matters scientific, have I ever read the claim, put forward seriously, that the moon landings were faked. Never. I’m not counting of course, books like Bad Astronomy and podcasts like the magnificent Skeptics’ Guide to the Universe, in which such claims are comprehensively debunked.

The SGU podcast – a great source for exciting science developments, criticism of science reporting, and debunking of pseudo-science

Scientists are a skeptical and largely independent lot, no doubt about it, and I’ve stated many times that scepticism and curiosity are the twin pillars of all scientific enquiry. So the idea that scientists could be persuaded, or cowed into participating in a conspiracy (at whose instigation?) to hoodwink the public about these landings is – well let’s just call it mildly implausible.

But of course, it could explain the US government’s massive deficit. That’s it! All those billions spent on hush money to astronauts, engineers, technicians (or were they all just actors?), not to mention nosey reporters, science writers and assorted geeks – thank god fatty Frump is here to make America great again and lift the lid on this sordid scenario, like the great crusader against fake news that he is.

But for now let’s leave the conspiracy aspect of this matter aside, and return to the question of whether these moon landings could ever have occurred in the late sixties and early seventies. I have to say, when it was put to me, during this conversation, that the technology of the time wasn’t up to putting people on the moon, my immediate mental response was to turn this statement into a question. Was the technology of the time up to it? And this question then turns into a research project. In other words, let’s find out, let’s do the research. Yay! That way, we’ll learn lots of interesting things about aeronautics and rocket fuel and gravitational constraints and astronaut training etc, etc – only to forget most of it after a few years. Yet, with all due respect, I’m quite sure my ‘adversary’ in this matter would never consider engaging in such a research project. She would prefer to remain ‘open-minded’. And if you believe that the whole Apollo project was faked, why not believe that all that’s been written about it before and since has been faked too? Why believe that the Russians managed to get an astronaut into orbit in the early sixties? Why believe that the whole Sputnik enterprise was anything but complete fakery? Why believe anything that any scientist ever says? Such radical ‘skepticism’ eliminates the need to do any research on anything.

But I’m not so open-minded as that, so in my dogmatic and doctrinaire fashion I will do some – very limited – research on that very exciting early period in the history of space exploration. I’ll report on it next time.

Written by stewart henderson

February 25, 2017 at 12:34 pm