understanding genomics 1 – mitochondrial DNA
Canto: So maybe if we got humans to mate with bonobos we’d get a more promising hybrid offspring?
Jacinta: Haha well it’s not that simple, and I don’t mean just physiologically…
Canto: Okay, those species wouldn’t be much attracted to each other – though I’ve heard that New Zealanders are very much attracted to sheep, but that just might be fantasy. But seriously, if two species – like bonobos and chimps – can interbreed, why can’t bonobos and humans? And they don’t have to canoodle – it could be done by in vitro fertilisation, right?
Jacinta: Well, bonobos and chimps are much more closely related to each other than they are to humans. And if you think bonobo-human hybridisation will somehow create a female-dominant libertarian society, well – it surely ain’t that simple. What we see in bonobo society is a kind of social evolution, not merely a matter of genetics. But having said that, I’m certainly into exploring genetics and genomics more than I’ve done so far.
Canto: Yes, I’ve been trying to educate myself on alleles, haplotypes, autosomal and mitochondrial DNA, homozygotism and heterozygotism (if there are such words), single nucleotide polymorphisms and… I’m confused.
Jacinta: Well, let’s see if we can make more sense of the science, starting with, or continuing with Who we are and how we got here, which is mostly about ancient DNA but also tells us much about the past by looking at genetic variation within modern populations. Let me quote at length from Reich’s book, a passage about mitochondrial DNA – the DNA in our mitochondria which is somehow passed down only along female lines. I’ve no idea how that happens, but…
The first startling application of genetics to the study of the past involved mitochondrial DNA. This is a tiny proportion of the genome – only approximately 1/200,000th of it – which is passed down from mother to daughter to granddaughter. In 1987, Allan Wilson and his colleagues sequenced a few hundred letters of mitochondrial DNA from diverse people around the world. By comparing the mutations that were different among these sequences, he and his colleagues were able to construct a family tree of maternal relationships. What they found is that the deepest branch of the tree – the branch that left the main trunk earliest – is found today only in people of sub-Saharan African ancestry, suggesting that the ancestors of modern humans lived in Africa. In contrast, all non-Africans today descend from a later branch of the tree.
Canto: Yes, I can well understand the implications of that analysis, but it skates fairly lightly over the science, understandably for a book aimed at the general public. To be clear, they looked at the same stretches of mitochondrial DNA in diverse people, comparing differences – mutations – among them. And in some there were many mutations, suggesting time differences, due to that molecular clock thing. And I suppose those that differed most – from whom? – had sub-Saharan ancestry.
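That comparison can be sketched in a few lines of Python. The sequences below are invented for illustration, but the logic is the same as in the Wilson study: count mismatches between aligned stretches, and treat the most-different lineage as the deepest branch.

```python
# Toy version of the comparison: count the positions at which two
# aligned mitochondrial DNA stretches differ. All sequences here are
# made up for illustration.
def pairwise_differences(a, b):
    """Number of positions at which two equal-length aligned sequences differ."""
    assert len(a) == len(b), "sequences must be aligned to equal length"
    return sum(x != y for x, y in zip(a, b))

seqs = {
    "person_1": "ACGTACGTACGT",
    "person_2": "ACGTACGAACGT",   # 1 difference from person_1: a recent split
    "person_3": "ATGTCCGAACTT",   # more differences: a deeper split
}

for x in sorted(seqs):
    for y in sorted(seqs):
        if x < y:
            print(x, y, pairwise_differences(seqs[x], seqs[y]))
```

With these made-up sequences, person_3 differs most from the other two – the toy analogue of the deepest branch of the tree.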
Jacinta: Dating back about 160,000 years, according to best current estimates.
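The arithmetic behind such a date is a back-of-the-envelope molecular clock. Both numbers in this sketch are illustrative, chosen to land on that 160,000-year figure – they’re not real measured values:

```python
# Molecular-clock sketch: if substitutions accumulate at a roughly
# constant rate, the time since two lineages split is approximately
# (fraction of differing sites) / (2 * rate). The factor of 2 is there
# because BOTH lineages have been accumulating mutations since the split.
diffs_per_site = 0.016            # hypothetical fraction of sites that differ
rate_per_site_per_year = 5e-8     # hypothetical substitution rate
years = diffs_per_site / (2 * rate_per_site_per_year)
print(f"estimated divergence: {years:,.0f} years")   # 160,000 years
```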
Canto: The science still eludes me. First, how does mitochondrial DNA pass only through the female line? We all have mitochondria, after all.
Jacinta: Okay, I’ve suddenly made myself an expert. It all has to do with the sperm and the egg. One’s much bigger than the other, as you know, because the egg carries nutrients, including mitochondria, the only organelle in your cytoplasm that has its own DNA. Your own little spermatozoa are basically just packages of nuclear DNA, with a tail. Our mitochondrial DNA appears to have evolved separately from our nuclear DNA because mitochondria, or their ancestors, had a separate existence before being engulfed by the ancestors of our eukaryotic cells, in a theory that’s generally accepted if difficult to prove. It’s called the endosymbiosis theory.
Canto: So mitochondria probably had a separate, prokaryotic existence?
Jacinta: Most likely, which could take us to the development, the ‘leap’ if you like, of prokaryotic life into the eukaryotic, but we won’t go there. Interestingly, they’ve found that some species have mitochondrion-related organelles with no genome, and our own and other mammalian mitochondria are full of proteins – some 1500 different types – that are coded for by nuclear rather than mitochondrial DNA. Our mitochondrial DNA only codes for 13 different types of protein. It may be that there’s an evolutionary process going on that’s transferring all of our mitochondrial DNA to the nucleus, or there might be an evolutionary reason why we’re retaining a tiny proportion of coding DNA in the mitochondria.
Canto: So – we’ve explained why mitochondrial DNA follows the female line, next I’d like to know how we trace it back 160,000 years, and can place the soi-disant mitochondrial Eve in sub-Saharan Africa.
Jacinta: Well the term’s a bit Judeo-Christian (there’s also a Y-chromosomal Adam), but she’s the matrilineal most recent common ancestor (mt-MRCA, and ‘Adam’ is designated Y-MRCA).
Canto: But both of these characters had parents and grandparents – who would be somehow just as common in their ancestry but less recent? I want to know more.
Jacinta: To quote Wikipedia…
… she is defined as the most recent woman from whom all living humans descend in an unbroken line purely through their mothers and through the mothers of those mothers, back until all lines converge on one woman.
… but I’m not sure if I understand that convergence. It clearly doesn’t refer to the first female H sapiens, it refers to cell lines, haplogroups and convergence in Africa. One of the cell lines used to pinpoint this convergence was HeLa, the very first and most commonly used cell line for a multiplicity of purposes…
Canto: That’s the Henrietta Lacks cell line! We read The Immortal Life of Henrietta Lacks! What a story!
Jacinta: Indeed. She would be proud, if she only knew… So, after obtaining data from HeLa and another cell line, that of a !Kung woman from Southern Africa, as well as from 145 women from a variety of populations:
The published conclusion was that all current human mtDNA originated from a single population from Africa, at the time dated to between 140,000 and 200,000 years ago.
Canto: So mt-MRCA is really a single population rather than a single person?
Jacinta: Yeah, maybe, sorta, but don’t quote me. The Wikipedia article on this gives the impression that it’s been sheeted home to a single person, but it’s vague on the details. Given the way creationists leap on these things, I wish it were made clearer. Anyway the original analysis from the 1980s still seems robust as to the time-frame. The key is to work out when all female lineages converge, given varied mutation rates. So, I’m going to quote at length from the Wikipedia article on mt-MRCA, and try to translate it into Jacinta-speak.
Branches are identified by one or more unique markers which give a mitochondrial “DNA signature” or “haplotype” (e.g. the CRS [Cambridge Reference Sequence] is a haplotype). Each marker is a DNA base-pair that has resulted from an SNP [single nucleotide polymorphism] mutation. Scientists sort mitochondrial DNA results into more or less related groups, with more or less recent common ancestors. This leads to the construction of a DNA family tree where the branches are in biological terms clades, and the common ancestors such as Mitochondrial Eve sit at branching points in this tree. Major branches are said to define a haplogroup (e.g. CRS belongs to haplogroup H), and large branches containing several haplogroups are called “macro-haplogroups”.
So let’s explain some terms. A genetic marker is simply a DNA sequence with a known location on a chromosome. A haplotype or haploid genotype is, as the haploid term suggests, inherited from one rather than both parents – in this case a set of alleles inherited together. SNPs or ‘snips’ are differences of a single nucleotide – e.g. the exchange of a cytosine (C) with a thymine (T). As to the rest of the above paragraph, I’m not so sure. As to haplogroups, another lengthy quote makes it fairly clear:
A haplogroup is… a group of similar haplotypes that share a common ancestor with a single-nucleotide polymorphism mutation. More specifically, a haplogroup is a combination of alleles at different chromosomal regions that are closely linked and that tend to be inherited together. As a haplogroup consists of similar haplotypes, it is usually possible to predict a haplogroup from haplotypes. Haplogroups pertain to a single line of descent. As such, membership of a haplogroup, by any individual, relies on a relatively small proportion of the genetic material possessed by that individual.
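At its crudest, assigning a haplotype to a haplogroup is just asking ‘does this haplotype carry the group’s defining marker SNPs?’. Here’s a minimal sketch – the positions, bases and group names below are invented for illustration, not real haplogroup definitions:

```python
# Hypothetical haplogroup definitions: each group is defined by the
# bases found at particular marker positions. These values are made up.
HAPLOGROUP_MARKERS = {
    "H":  {263: "G", 8860: "G"},
    "L0": {263: "G", 1048: "T"},
}

def assign_haplogroup(snps):
    """Return the first haplogroup whose defining markers all match, else None."""
    for group, markers in HAPLOGROUP_MARKERS.items():
        if all(snps.get(pos) == base for pos, base in markers.items()):
            return group
    return None

sample = {263: "G", 8860: "G", 16519: "C"}
print(assign_haplogroup(sample))   # "H"
```

As the quote says, only a small set of markers is needed – membership relies on a relatively small proportion of the individual’s genetic material.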
Canto: Anyway, getting back to mt-MRCA, obviously not as memorable a term as mitochondrial Eve, it seems to be more a concept than a person, if only we could get people to understand that. If you want to go back to the first individual, it would be the first mitochondrial ancestor that merged with a eukaryotic cell, or vice versa. From the human perspective, mt-MRCA can be best conceptualised as the peak of a pyramid from which all… but then she still had parents, and presumably aunts and uncles… It just does my head in.
References
https://www.genome.gov/genetics-glossary/Mitochondrial-DNA
https://en.wikipedia.org/wiki/Mitochondrial_Eve
https://en.wikipedia.org/wiki/Haplogroup
the evolution of reason: intellectualist v interactivist
In The Enigma of Reason, cognitive psychologists Hugo Mercier and Dan Sperber ask the question – What is reason for? I won’t go deeply into their own reasoning, I’m more interested in the implications of their conclusions, if correct – which I strongly suspect they are.
They looked at two claims about reason’s development, the intellectualist claim, which I might associate with Aristotelian and symbolic logic, premises and conclusions, and logical fallacies as pointed out by various sceptical podcasts and websites (and this can also be described as an individualist model of reasoning), and the interactionist model, in which reason is most effectively developed collectively.
In effect, the interactionist view is claiming that reason evolved in an interactionist environment. This suggests that it is language-dependent, or that it obviously couldn’t have its full flowering without language. Mercier and Sperber consider the use of reason in two forms – justificatory and argumentative. Justificatory reasoning tends to be lazy and easily satisfied, whereas it is in the realm of argument that reason comes into its own. We can see the flaws in the arguments of others much more readily than we can our own. This accords with the biblical saying about seeing motes in the eyes of others while being blind to the bricks in our own – or something like that. It also accords with our well-attested over-estimation of ourselves, in terms of our looks, our generosity, our physical abilities and so on.
I’m interested in this interactionist view because it also accords with my take on collaboration, participatory democracy and the bonobo way. Bonobos of course don’t have anything like human reason, not having language, but they do work together more collectively than chimps (and chimp-like humans) and show a feeling towards each other which some researchers have described as ‘spiritual’. For me, a better word would be ‘sympathetic’. Seeing the value in others’ arguments helps to take us outside of ourselves and to recognise the contribution others make to our thinking. We may even come to realise how much we rely on others for our personal development, and that we are, for better or worse, part of a larger, enriching whole. A kind of mildly antagonistic but ultimately fulfilling experience.
An important ingredient to the success of interactionist reasoning is the recognition of and respect for difference. That lazy kind of reasoning we engage in when left to ourselves can be exacerbated when our only interactions are with like-minded people. Nowadays we recognise this as a problem with social media and their algorithms. The feelings of solidarity we get with that kind of interaction can of course be very comforting but also stultifying, and they don’t generally lead to clear reasoning. For many, though, the comfort derived from solidarity outweighs the sense of clarity you might, hopefully, get from being made to recognise the flaws in your own arguments. This ghettoisation of reason, like other forms of ghettoisation, is by and large counter-productive. The problem is to prevent this from happening while reducing the ‘culture shock’ that this might entail. Within our own WEIRD (Western, Educated, Industrialised, Rich, Democratic) culture, where the differences aren’t so vast, being challenged by contrary arguments can be stimulating, even exhilarating. Here’s what the rich pre-industrialist Montaigne had to say on the matter:
The study of books is a languishing and feeble motion that heats not, whereas conversation teaches and exercises at once. If I converse with a strong mind and a rough disputant, he presses upon my flanks, and pricks me right and left; his imaginations stir up mine; jealousy, glory, and contention, stimulate and raise me up to something above myself; and acquiescence is a quality altogether tedious in discourse.
Nevertheless, I’ve met people who claim to hate arguments. They’re presumably not talking about philosophical discourse, but they tend to lump all forms of discord together in a negative basket. Mercier and Sperber, however, present a range of research to show that challenges to individual thinking have an improving effect – which is a good advert for diversity. But even the most basic interactions, for example between mother and child, show this effect. A young child might be asked why she took a toy from her sibling, and answer ‘because I want it’. Her mother will point out that the sibling wants it too, and/or had it first. The impact of this counter-argument may not be immediate, but given normal childhood development, it will be the beginning of the child’s road to developing more effective arguments through social interaction. In such an interactive world, reasons need to be much more than purely selfish.
The authors give examples of how the most celebrated intellects can go astray when insufficiently challenged, from dual Nobel prize-winner Linus Pauling’s overblown claims about vitamin C to Alphonse Bertillon’s ultra-convoluted testimony in favour of Alfred Dreyfus’ guilt, to Thomas Jefferson’s absurdly tendentious arguments against emancipation. They also show how the standard fallacious arguments presented in logic classes can be valid under particular circumstances. Perhaps most convincingly they present evidence of how group work in which contentious topics were discussed resulted in improvements in individual essays. Those whose essay-writing was preceded by such group discussion produced more complex arguments for both sides than did those who simply read philosophical texts on the issues.
It might seem strange that a self-professed loner like me should be so drawn to an interactionist view of reason’s development. The fact is, I’ve always seen my ‘lonerdom’ as a failing, which I’ve never tried very hard to rectify. Instead, I’ve compensated by interacting with books and, more recently, podcasts, websites and videos. They’re my ‘people’, correcting and modifying my own views through presenting new information and perspectives (and yes, I do sometimes argue and discuss with flesh-and-blood entities). I’ve long argued that we’re the most socially constructed mammals on the planet, but Mercier and Sperber have introduced me to a new word – hypersocial – which packs more punch. This hypersocial quality of humans has undoubtedly made us, for better or worse, the dominant species on the planet. Other species can’t present us with their viewpoints, but we can at least learn from the co-operative behaviours of bonobos, cetaceans, elephants and corvids, to name a few. That’s interaction of a sort. And increased travel and globalisation of communications means we can learn about other cultures and how they manage their environments and how they have coped, or not, with the encroachments of the dominant WEIRD culture.
When I say ‘we’ I mean we, as individuals. The authors of The enigma of reason reject the idea of reason as a ‘group-level adaptation’. The benefits of interactive reason accrue to the individual, and of course this can be passed on to other receptive individuals, but the level of receptivity varies enormously. Myside bias, the default position from our solipsistic childhood, has the useful evolutionary function of self-promotion, even survival, against the world, but our hypersocial human world requires effective interaction. That’s how Australian Aboriginal culture managed to thrive in a set of sub-optimal environments for tens of thousands of years before the WEIRDs arrived, and that’s how WEIRDs have managed to transform those environments, creating a host of problems along with solutions, in a story that continues….
Reference
H Mercier & D Sperber, The enigma of reason, 2017
a bonobo world 33: they don’t wear stilettos

anti-shoes, designed by Leanie van der Vyver
Bonobos don’t wear stilettos. Here’s why.
Bonobos don’t wear anything. But that’s not the end of the story.
Bonobos aren’t bipedal, though they have spurts of bipedalism. Their feet aren’t built for long-term bipedalism, of the kind we have evolved. It’s mostly to do with the big toe. Humans and our ancestors became bipedal after moving out of trees and into savannahs. This, along with our hands, the opposable thumb and so forth, helped us in hunting, as we were able to handle and manipulate weaponry, and to outstrip our prey in long-distance running. Losing our body hair and being able to sweat to keep our body temperature down – sweat cools us by evaporation, something like evaporative air-conditioning – was also an adaptation to our new hunting lifestyle, as, perhaps, was language or proto-language, which would’ve helped us to form groups and bring down a feast of big prey. Goodbye mammoths – too bad we didn’t evolve early enough to sample brontosaurus burgers.
So I imagine we developed solid pads of skin on our soles and heels as we scrambled over scree and bounced through brambles during hunts and childhood play. I experienced a bit of that in my own childhood, in the paths and fields of early Elizabeth (the town was the same age as myself). My heels were hardened in those early barefoot years as they were never to be again.
I suppose it was settlement that softened our feet and led to the idea of covering them for those increasingly rare outings into thorny bushland, or even just out in the fields, for the female and young male gatherers. The first shoes we know of, dating back only 10,000 years, were made of bark. These were, of course, utilitarian. We’re still a while away from stilettos, the ultimate non-utilitarian symbols.
The oldest leather shoes yet found date to c5,500 years ago. We can’t be sure how old ‘shoes’ really are – the first may just have been makeshift coverings, more or less painted on, or bound around and then tossed aside. Clearly they would’ve been more commonly used as we moved to a ‘softer’, more indoor, village life, and would have become more decorative and status-laden – though, interestingly, gods and heroes were invariably depicted barefoot by the ancient Greeks. The Romans used chiral (left and right) sandals in their armies (though standard chiral footwear is a modern phenomenon), and generally considered it a sign of civilised behaviour to wear shoes regularly, possibly the first people to do so, even if only among the upper class. So it was around this time, a couple of thousand years ago, that shoemaking became a profession.
Fast forward to the 15th century, and the first elevated shoes, designed to keep tender feet above the ordure of urban streets, became popular. These were originally in the form of overshoes or pattens. They protected not only the feet but the decorative, thin-soled poulains, with their long pointy toes, which were de rigueur for the fashionable of both sexes.
These original high-heels, then, were practical and clunky. Made from wood, their noisiness was an issue – mentioned in Shakespeare and Jane Austen – and they were mostly banned in church. More refined high heels were used by the upper classes, aka the well-heeled, especially royalty. Catherine de Medici and England’s Mary I wore them to look taller, and France’s Louis XIV banned the wearing of red high heels for everyone except those of his court.
The mass-production of footwear began in the nineteenth century, and so shoes for all sorts of specific purposes became a thing. And so we come to the notorious (for some) stiletto heel.
Named after the much more practical stiletto dagger, the stiletto heel, or shoe, invented by the usual moronic continental fashion types, has come in and out of style over the past century. Interestingly, the Wikipedia article on stilettos has a section on their benefits and disadvantages, with about five or six times more verbiage devoted to the benefits than the disadvantages. I’d love to meet the person who wrote it – while armed with a stiletto. Much of the benefit, according to this expert, lies in postural improvement, a claim completely contradicted by the disadvantages section, unsurprisingly:
All high heels counter the natural functionality of the foot, sometimes causing skeletal and muscular problems if users wear them excessively; such shoes are a common cause of venous complaints such as pain, fatigue, and heavy-feeling legs, and have been found to provoke venous hypertension in the lower limbs.
No mention of the fact that they instantly lower the wearer’s IQ by several points, unfortunately. Where is science when you need it?
Some of the benefits mentioned are risible – e.g. ‘they express your style and make you feel good’. As would going barefoot or wearing clodhoppers, if that’s your style. Another claim is that you can use the heels as a weapon to defend yourself. I mean, wtf? So you ask your assailant to wait while you unstrap your shoe and limpingly lunge at him? Or do you kick him in the nuts while keeping your balance on a square centimetre of padded metal? I’d like to see that.
Another apparent benefit is that they make you look femme fatale tough. I wonder that the military hasn’t considered them as essential for female personnel. While I admit that, in US-style or James Bondage-type movies, the black-leather-clad heroine-villain in matching stilettos and revolver does give me the proverbial kick in the fantasies, the plethora of YouTube videos showing absurdly-heeled models and other victims stumbling on stages and catwalks, their ankles twisted to right angles, provides a thrill of schadenfreude I could do without. A finer thrill, for me, would be to watch vids of the guilty fashion designers being tortured to within an inch of their lives by their own creations.
But let me go on. Our Wikipedia expert writes that the stilettoed look ‘boosts women’s self-confidence and that in turn makes them more likely to get promoted at work’. Now there’s a workplace I’d pay good money not to belong to. The expert goes on to point out the well-attested, but essentially shameful fact that tall people are more likely to get elected to leadership positions. In other words, had Donald Trump been a foot shorter, hundreds of thousands of US lives would surely have been saved in 2020. I should also feel relieved that, as a shorty myself, I’m automatically absolved from any leadership responsibilities.
So why was this claptrap allowed on Wikipedia? It seems that the website, so fabulously rigorous in fields such as maths, physics and biochemistry, has decided to slacken off when it comes to ‘popular culture’, which is both understandable and frustrating. The fact is that stilettos are way more decorative than functional, as is women’s role in the business world, by and large.
I admit that my views on clothing and footwear are heavily influenced by the years of my impressionable youth in the sixties and early seventies, when men sported long, flowing locks, multicoloured shirts and pants, and women mostly the same, though I loved to spot the odd tweedy female in short back and sides, and kickarse Doc Martens. There’s no accounting for taste.
Bonobo females are statistically smaller than males, in much the same proportion as human females. And yet they dominate. There’s nothing more to say.
References
https://en.wikipedia.org/wiki/Shoe
https://en.wikipedia.org/wiki/Stiletto_heel
Pinning down meiosis: sperm, mainly

Canto: Not very long ago I was reading Carl Zimmer’s book She has her mother’s laugh, and he was explaining meiosis. It was exciting, because I think I understood it. Being a regular science reader I’d read about meiosis and mitosis before but I could never remember, or perhaps I never clearly knew, the difference. But this time was different, and I thought ‘Yes!’, or maybe ‘Eureka!’ sounds better, because not only did I get it, or thought I did, but I thought ‘this is a new weapon against those who say they don’t believe in evolution’. There’s a fellow-teacher at my college who actually says this, but I’ve never really confronted her on it, apart from some mutterings.
Jacinta: So please explain yourself. Meiosis and mitosis are about cell division aren’t they?
Canto: Well I can’t explain myself, but at the time I thought ‘here’s a new one-word response to those who say they don’t believe in evolution’. The other one-word responses being ‘genes’, ‘genetics’, ‘genomics’ and other variants. Well okay, I can give a partial explanation. Almost everyone believes in evolution, that’s why they use smart phones rather than the earlier types of mobile phones or landlines or whatever. That’s why they use dishwashers and modern washing machines and modern computers, and drive modern cars instead of a horse-and-carriage, because evolution just means progressive development. What my fellow-teacher really should be saying is that she doesn’t believe in the Darwin-Wallace theory of natural selection from random variation, but she doesn’t say that because I strongly suspect she doesn’t have a clue what that means.
Jacinta: Right, so she doesn’t believe in the particular theory…
Canto: Which is proven by genes, the essential mechanism of random variation, which of course Darwin was completely unaware of. And by meiosis, another essential source of variety.
Jacinta: So, meiosis. It’s quite complex. Zimmer gives a brief explanation as you say, and there’s also a number of videos, from Khan Academy, Crash Course Biology and others, so let’s try to describe it for ourselves, with emphasis on variety or variation, which is the essential thing.
Canto: Mitosis, and hopefully I now will never forget this, is the cell division and replication that goes on in our bodies at every moment, and which enables us to grow from a foetus to a strapping lad or lassie, to heal wounds and even to have multiple times more neurons than old fatty Frump, maybe. It occurs among the somatic cells, and it essentially does it by replicating cells exactly, like replacing or adding to like.
Jacinta: But not exactly, otherwise we’d just be a growing blob of undifferentiated body cells, not liver, brain, blood, skin and other cells. That takes epigenetics, as I recall. Mitotically-created cells are identical as to chromosomes, but not as to expression. But anyway, meiosis. That’s how our germ cells are replicated.
Canto: Egg and sperm cells, together known as gametes. Khan Academy begins its article on meiosis with this:
… meiosis in humans is a division process that takes us from a diploid cell—one with two sets of chromosomes—to haploid cells—ones with a single set of chromosomes. In humans, the haploid cells made in meiosis are sperm and eggs. When a sperm and an egg join in fertilization, the two haploid sets of chromosomes form a complete diploid set: a new genome.
All fine, though this division process is damn complicated as we’ll discover. But what interested me in Zimmer’s account was this, and I’ll quote it at length, because it’s what got me excited about variation:
In men, meiosis takes place within a labyrinth of tubes coiled within the testicles. The tube walls are lined with sperm precursor cells, each carrying two copies of each chromosome, one from the man’s mother, the other from his father. When these cells divide, they copy all their DNA, so that now they have four copies of each chromosome. Rather than drawing apart from each other, however, the chromosomes stay together. A maternal and paternal copy of each chromosome line up alongside each other. Proteins descend on them and slice the chromosomes, making cuts at precisely the same spots.
As the cells repair these self inflicted wounds, a remarkable exchange can take place. A piece of DNA from one chromosome may get moved to the same position in the other, its own place taken by its counterpart. This molecular surgery cannot be rushed. All told, a cell may need three weeks to finish meiosis. Once it’s done, its chromosomes pull away from each other. The cell then divides twice, to make four new sperm cells. Each of the four cells inherits a single copy of all 23 chromosomes. But each sperm cell contains a different assembly of DNA.
Think of this last line – each sperm cell contains a different assembly of DNA.
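A very crude model of that ‘different assembly of DNA’ claim: treat one chromosome as a string of maternal (‘M’) and paternal (‘P’) letters, pick a random crossover point, and swap the tails. Even this single-crossover toy, on a 10-segment chromosome, yields 18 distinct patchworks:

```python
import random

# Crude single-crossover model of meiotic recombination. One chromosome
# is a string of maternal ('M') and paternal ('P') segments; a random
# cut point swaps the tails of the two copies, so each resulting
# chromosome is a different patchwork of the two.
def recombine(maternal, paternal, rng):
    cut = rng.randrange(1, len(maternal))   # crossover position
    return maternal[:cut] + paternal[cut:], paternal[:cut] + maternal[cut:]

rng = random.Random(42)                     # fixed seed for repeatability
sperm = set()
for _ in range(1000):
    a, b = recombine("M" * 10, "P" * 10, rng)
    sperm.update([a, b])
print(len(sperm), "distinct chromosome patchworks from one pair")
```

Real meiosis allows multiple crossovers per homologue pair, across 23 pairs, so the number of possible sperm genomes is astronomically larger than this toy suggests.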
Jacinta: Yes, and there can be up to a billion sperm cells released in each ejaculate, but who’s counting? And are they all different?
Canto: Apparently so. Even the Daily Mail says so, so it must be true. And when you think of it, if there weren’t differences, each offspring born from that man’s sperm would be a clone…
Jacinta: Not necessarily – what about the egg cells?
Canto: Yes, I believe it’s the same meiosis process with them, though not quite. Anyway, there’s the same mixing of chromosomes, so the chances of any two egg cells, or I should say their chromosomal complement, being identical is extremely small.
Jacinta: So, meiosis – I’ve been trying to pin it all down, but I don’t feel I’ve succeeded. Here goes, anyway. Meiosis is a special type of cell division, confined to our germ cells, the sperm and egg cells. The gametes. The haploid cells. As opposed to the diploid cells which make up all the somatic or body cells we have. That’s to say, those cells reproduce differently from diploid, somatic cells. But before I try to explain the complex process of their reproduction, what about their production? Where do these haploid cells come from? Now I might answer glibly that the egg cells, also called oocytes, come from the ovaries, and the spermatozoa come from the testes, but that’s not really my question.
Canto: In fact I’m not even sure if you’ve got it right so far. The egg cell is called an ovum. An oocyte is a precursor egg cell I think. I’m not sure if it matters much, but we’re looking at the production of these gametes. Presumably the kinds of gametes we produce depends on our gender, which is determined at conception? Of course, in these gender-bending days, who knows.
Jacinta: Oh dear. Let’s try not to get confused. Assume an embryo or foetus is straightforwardly male or female, or potentially so. I seem to recall that males only start producing sperm at puberty, whereas females produce all their egg cells before that, and only have a fixed number, and egg cells are quite huge in comparison to sperm, and even compared to your average somatic cells – though some neurons have super-long axons. When females reach the stage of menstruation, that’s when they start releasing eggs.
Canto: Okay in the above quote from Zimmer, sperm precursor cells are mentioned. They’re also called spermatocytes, and the labyrinth of coiled tubes he also mentions are the seminiferous tubules. This is where the meiosis happens, in males. There are two types of spermatocyte, primary and secondary. The primary spermatocytes are diploid, and the secondary, formed after the first meiosis process (meiosis 1), are haploid.
Jacinta: To possibly confuse matters further, there’s a multi-stage process happening in those seminiferous tubules, a process called spermatogenesis. It starts with the spermatogonia (and maybe we’ll leave the spermatogonium’s origins for another post), which develop into primary spermatocytes, then into secondary spermatocytes, then into spermatids, and finally into sperm.
Canto: Yes, so the first step you mention is mitotic, with diploid cells creating diploid cells, the primary spermatocytes…
Jacinta: And mitosis has those four steps or phases – PMAT, as students recall it: prophase, metaphase, anaphase and telophase – while meiosis has the same but in two rounds, PMAT for meiosis 1 and PMAT for meiosis 2. So as we’ve already pointed out, this two-round process has a final result of four new cells. Now, before meiosis 1, the cells go through interphase, but I won’t detail that here. In prophase 1, chromosomes are brought together in pairs, called homologues. Their alleles are aligned together, but then this more or less random ‘crossing over’ occurs, presumably with the aid of some busy little proteins, which mixes the chromosomes up. Each homologue pair can have many of these crossovers. More mixing happens during metaphase 1, when homologue pairs, with their crossings-over, line up randomly at the metaphase plate. I’m not pretending to fully understand all this, but the main point is that the variety we find in the final product, the sperm cells, is brought about essentially during prophase and metaphase in meiosis 1 of the double cycle.
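Canto: To make that shuffling concrete, here’s a toy sketch in Python – nothing anatomically accurate, mind you. The four-locus chromosome, the single crossover point and the crossover probability are all invented for illustration; the point is just how crossing over in prophase 1 plus random line-up at the metaphase plate multiplies the variety of gametes:

```python
import random

def make_gamete(maternal, paternal, crossover_prob=0.3):
    """Build one toy gamete from a single homologue pair.

    maternal/paternal are lists of allele labels along one chromosome.
    A crossover may swap the tails of the two homologues (prophase 1),
    then one of the two resulting homologues is picked at random
    (random line-up at the metaphase plate, then segregation).
    """
    m, p = list(maternal), list(paternal)
    if random.random() < crossover_prob:
        point = random.randrange(1, len(m))          # crossover position
        m[point:], p[point:] = p[point:], m[point:]  # swap chromosome tails
    return tuple(random.choice([m, p]))

random.seed(1)  # fixed seed so the run is repeatable
mom = ["A", "B", "C", "D"]  # alleles inherited from mother
dad = ["a", "b", "c", "d"]  # alleles inherited from father
distinct_gametes = {make_gamete(mom, dad) for _ in range(1000)}
print(len(distinct_gametes))  # far more than the two parental combinations
```

Without crossing over, only the two parental combinations could ever appear; with even one crossover per pair, recombinant gametes like (‘A’, ‘b’, ‘c’, ‘d’) turn up as well – and real meiosis shuffles 23 pairs with multiple crossovers each.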
Canto: It does get me more interested in understanding meiosis more fundamentally though, as well as mitosis. The phases and the processes that bring them about, the proteins, the chromatin, the centromeres, the metaphase plate, and of course oogenesis, polar bodies and much much more.
Jacinta: Yes – I think meiosis does point to a lot of the variation in the world of organisms, but it would be hard to get those who ‘don’t believe in evolution’ to think about this and its relevance. They tend not to listen to explanations or to want to make connections.
Canto: You can give up on them or keep plugging away with the ‘what about this?’ or ‘can you explain that?’ Or demonstrate to them directly or indirectly, the results of those powerful explanations, in medicine, in astronomy, in our technology, and in our human relations.
References
She has her mother’s laugh: the powers, perversions and potential of heredity, by Carl Zimmer, 2018
https://en.wikipedia.org/wiki/Meiosis
https://www.khanacademy.org/science/biology/cellular-molecular-biology/meiosis/a/phases-of-meiosis
the second law of thermodynamics – some preliminary thoughts

the essential battle – to be more effectively productive than consumptive
Early on in his book Enlightenment Now, Steven Pinker makes much of the second law of thermodynamics, aka the law of entropy, as something way more than an ordinary law of physics, citing others who’ve claimed the same thing, including Arthur Eddington, C P Snow and Peter Atkins. Soaring rhetoric about pinnacles and ‘without which nought’ tends to be employed, tempting dilettantes comme moi to wonder, if it’s so effing over-arching why is it only the second law?
So the first law of T is about conservation of energy, the third is about the impossibility of dropping to absolute zero. Maybe it’s just prosaically about chronology?
Maybe. The first law, first made specific by Rudolf Clausius in 1850 but much refined since, essentially states that in a closed system the change in internal energy is equal to the amount of heat applied minus the work done on the system’s external environment. Basically, you can’t get more out of the system than you put into it. The second law also involves many contributors, including Sadi Carnot in 1824, and Clausius again in 1850. Pinker attributes its largely up-to-date statistical iteration to the physicist Ludwig Boltzmann, whose work on the law dates to the 1860s and 70s. The third law, which also employs the concept of entropy, wasn’t formulated until the early twentieth century, firstly by the chemist Walther Nernst. So maybe it’s a chronological thing, but it certainly seems uncertain.
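For the record, the two laws under discussion have compact textbook forms – the symbols and sign conventions below are the standard ones (work done *by* the system counted as positive), not anything particular to Pinker’s presentation:

```latex
% First law (closed system): energy is conserved.
% Q = heat supplied to the system, W = work done by the system on its surroundings.
\Delta U = Q - W

% Second law, in Boltzmann's statistical form:
% S = entropy, k_B = Boltzmann's constant,
% W = number of microstates consistent with the given macrostate.
S = k_B \ln W
```

Boltzmann’s version is what underwrites the ‘beds don’t make themselves’ intuition: disordered macrostates correspond to vastly more microstates, so a system left to itself overwhelmingly tends to wander towards them.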
Anyway, the mystery attached to its title is just the start for the second law. It’s been formulated in multiple ways by scientists and popularisers. It’s mystical, hard-nosed, ineluctable, basic, obvious, magnificent and, according to Eddington, supreme. Entropy can be applied usefully to everything, from the universe to a cup of coffee and its consumer. The first point to always keep in mind – and for me that’s not easy – is that, left to itself, any system, such as those just mentioned, drifts inexorably from low to high entropy. To put it more succinctly, beds don’t make themselves. This obvious point may seem depressing, and often is, but it opens up the intriguing possibility that, if not left to itself, a bed can be made in many mysterious and inspiring ways. Energy into the system, systematically directed, creates art and science, life and intelligence, natural and synthetic. Natural selection from random variation, as we have so intelligently discovered, provides just such a system, through solar energy complexly distributed.
Of course, before we get too excited, there are problems. Although solar energy is the ultimate ‘without which nought’ of our systematic existence, or at least the emergence of it, we human energumens tamper with and lay waste to a great deal of other complex systems, including what we so euphemistically term ‘livestock’, in order to order ourselves in increasingly ordered, soi-disant civilised ways. From farming to fracking, from radioactive atolls to space debris, we leave many a wreck behind, and it’s still and may always be an open question whether we end up drowning in our own crap, species-wise. Animals are born exploiters, as Pinker writes, and maybe we should celebrate the fact that we’re better at it than other animals. Certainly we need to acknowledge it, with due deference and responsibility, while trying to temper the reckless excitement with which we often set out to do things – though they may be our best moments.
The point is that the principal human battle, the main game, is the battle against the inexorability of entropy, and that is why globalism, for as long as this globe alone is our home, and humanism, as long as we see, as Darwin so clearly did, that our existence is due to, and dependent on, the evolutionary bush of living organisms on this planet, must be our highest priorities. William Faulkner famously expressed an expectation that humanity would prevail, but there’s nothing inevitable about it, and far from it, given the energy that needs to be constantly supplied to keep the consequences of the second law at bay. Perhaps the analogy of bacteria in a petri dish is just a little oversimplified – for a start, the nutrients in our particular petri dish have increased rather than diminished, thanks largely to human ingenuity. As a result, though the human population has increased seven-fold over the past 200 years, our per capita caloric intake has also increased. But of course there’s no guarantee that this will continue – and far from it.
One of the problems is being too smart for our own good, always arguably. In the early fifties, the Pacific, and Micronesia’s Marshall Islands in particular, was the scene of unprecedented damage and contamination as the USA tried to improve and perfect its new thermonuclear weaponry there. Not much concern was shown, of course, for the locals, not to mention the undersea life, at a time when the spectacular effects of the atom bombs on Japan had created both a global panic and a thrill about super-weaponry. The nuclear fusion weapons tested in that period dwarfed the Hiroshima bomb by orders of magnitude in terms of power and radioactive effects, and there was much misinformation even among experts about the extent of those effects. We were playing not just with fire, but with the most powerful and transformational energies in the universe, within a scant few decades of having discovered them. And today the USA, due to various accidents of history, has a nuclear arsenal of unfathomable destructive power, and a political system sorely in need of overhaul. With galloping developments in advanced AI, UAV technology and cyber hacking, it would be ridiculous to project complacent human triumphalism even a decade into the future, never mind into the era of terraforming other worlds.
Einstein famously said, at the dawn of the nuclear era, ‘everything has changed except our way of thinking’. Of course, ways of thinking are the most difficult things to change, and yet we have managed it to some extent. Even in the sixties, hawks in the US and other administrations were talking up nuclear strikes, but apart from the buffoonish Trump and his counterpart Kim of North Korea – people we’re sadly obliged to take seriously – such talk is now largely redundant. After the horrors of two global conflicts, and through the growing realisation of our own destructive power, we’ve forced ourselves to think more globally and co-operatively. There’s actually no serious alternative. Having already radically altered the eco-system that has defied entropy for a blink of astronomical time, we’ll need all our co-operative energy to maintain the miracle that we’ve so recently learned so much about.
Why science?

why is it so?
Ever since I was a kid I was an avid reader. It was my escape from a difficult family situation and a hatred or fear of most of my teachers. I became something of a quiet rebel, rarely reading what I was supposed to read but always trying to bite off more than I could chew in terms of literature, history, and occasionally science. I did find, though, that I could chew almost anything – especially in literature and history. And I loved the taste. Science, though, was different. It certainly didn’t come naturally to me. I didn’t know any science buffs and in fact I had no mentors for any of my reading activities. We did have encyclopaedias, though, and my random reading turned up the likes of Einstein, Newton, Darwin, Pasteur and other Big Names in science. Of course I was more interested in their bios than in the nature of their exotic researches, but in my idealised view they seemed very pure in their quest for greater understanding of the material world. I sometimes wished I could be like them but mostly I just dived into ‘literature’, a more comfortable world in which ordinary lives were anatomised by high-brow authors like Austen, Eliot and James (I had a fetish for 19th century lit in my teens). I took silent pride in my critical understanding of these texts – it surely set me above my classmates – though I remember one day walking home with one of the smartest kids in my class, who regaled me with his exploration of the electronics of a transistor radio he was pulling apart at home. I remember trying to listen, half ashamed of my ignorance, half hoping to change the subject to something I could sound off about.
Later, having dropped out of my much-loathed school, I started hanging out, or trying to, with other school drop-outs in my working-class neighbourhood. I didn’t fit in with them to say the least, but the situation worsened when they began tinkering with or talking about cars, which held no interest for me. I was annoyed and impressed at how articulate they were about carbies, distributors and camshafts, and wondered if I was somehow wasting my life.
Into my twenties, living la vie boheme in punk-fashionable poverty among art students and amateur philosophers, I read and was definitely intrigued by Alan Chalmers’ unlikely best-seller What is this thing called science? It sparked a brief interest in the philosophy of science rather than science itself, but interestingly it was a novel that really set me to reading and trying to get my head around science – a big topic! – on a more or less daily basis. I was about 25 when I read Thomas Mann’s The Magic Mountain, in which Hans Castorp, a young man of about my age at the time, was sent off to an alpine sanatorium to be cured of tuberculosis. Thus began a great intellectual adventure, but it was the scientific explorations that most spoke to me. Wrapped up in his loggia, reading various scientific texts, Castorp took the reader on a wondering tour of the origin of life, and of matter itself, and it struck me that these were the key questions – if you want to understand yourself, you need to understand humanity, and if you want to understand humanity you need to understand life itself, and if you want to understand life, you need to understand the matter that life is organised from, and if you need to understand matter…
I made a decision to inform myself about science in general, via the monthly magazine, Scientific American, where I learned at least something about oncogenes, neutrinos and the coming AIDS epidemic, inter alia. I read my first wholly scientific book, Dawkins’ The Selfish Gene, and, as I was still living la vie bohème, I enjoyed the occasional lively argument with housemates or pub philosophers about the Nature of the Universe and related topics. In the years since I’ve read and half-digested books on astronomy, cosmology, palaeontology and of course the history of science in general. I’ve read The origin of species, Darwin’s Voyage of the Beagle and at least four biographies of Darwin, including the monumental biography by Adrian Desmond and James Moore. I’ve also read a biography of Alfred Russel Wallace, and more recently, Siddhartha Mukherjee’s The Gene, which traces the search for the cause of the random variation essential to the Darwin-Wallace theory. And I still read science magazines like Cosmos on a more or less daily basis.
These readings have afforded me some of the greatest pleasures of my life, which would, I suppose, be enough to justify them. But I should try to answer the why question. Why is science so thrilling? The answer, I hope, is obvious. It isn’t science that’s thrilling, it’s our world. I’m not a science geek, it doesn’t come easily to me. When, for example, a tech-head explains how an electronic circuit works, I have to watch the video many times over, look up terms, refer to related videos, etc, in order to fix it in my head, and then, like most people, I forget the vast majority of what I read, watch or listen to. But what keeps me going is a fascination for the world – and the questions raised. How did the Earth form? Where did the water come from? How is it that matter is electrical, full of charge? How did language evolve? How has our Earth’s atmosphere evolved? How are we related to bananas, fruit flies, australopithecines and bats? How does our microbiome relate to obesity? What can we expect from CRISPR/Cas9 editing technology? What’s the future for autonomous vehicles, brain-controlled drones and new-era smart phones?
This all might sound like gaga adolescent optimism, but I’m only cautiously optimistic, or maybe not optimistic at all, just fascinated about what might happen, on the upside and the downside. And I’m endlessly impressed by human ingenuity in discovering new things and using those discoveries in innovative ways. I’m also fascinated, in a less positive way, by the anti-scientific tendencies of conspiracy theorists, religionists, new-agers and those who identify with and seem trapped by ‘heavy culture’. Podcasts such as The Skeptics’ Guide to the Universe, Skeptoid and Australia’s The Skeptic Zone, as well as various science-based blogs like Why Evolution is True and Skeptical Science are fighting a seemingly never-ending fight against the misinformation churned out by passionate supporters of fixed non-evidence-based positions. But spending too much time arguing with such types does your head in, and I prefer trying to accentuate the positive than trying to eliminate the negative.
And on that positive side, exciting things are always happening – in battery technology, cancer research, exoplanetary discoveries, robotics, brain implants – more developments than any one person can keep abreast of.
So I’ll end with some positive and reassuring remarks about science. It’s not some esoteric activity to be suspicious of, but neither is it something easily definable. It’s not a search for the truth, it’s more a search for the best, most comprehensive, most consistent and productive explanation for phenomena. I don’t believe there’s such a thing as the scientific method – the methods of Einstein can’t easily be compared with those of Darwin. Methods necessarily differ with the often vast differences between the phenomena under investigation. Conspiracy theories such as the moon landings ‘hoax’ or the climate science ‘fraud’ would require that scientists and their ancillaries are incredibly disciplined, virtually robotic collaborators in sinister plots, rather than normal, questing, competitive, collaborative, inspired and inspiring individuals, struggling desperately to make sense and make breakthroughs. In the field of human health, scientists are faced with explaining the most complex organism we know of – the human body with its often perverse human mind. It’s not at all surprising that pseudo-science and quackery are so common in this field, in which everyone wants to live and thrive as long as possible. But we need to be aware that with such complexity we will encounter many false hopes and only partial solutions. The overall story, though, is positive – we’re living longer and healthier, in statistical terms, than ever before. The past, for the most part, is another country which we might like to briefly visit, but we wouldn’t want to live there. And science is largely to be thanked for that. So, why not science? The alternatives do nothing for me.

The SGU team – science nerds fighting the good fight
When was the first language? When was the first human?
Reading a new book of mine, Steven Pinker’s The sense of style, 2014, I was bemused by his casual remark on the first page of the first chapter, ‘The spoken word is older than our species…’. Hmmm. As Bill Bryson put it in A short history of nearly everything, ‘How do they know that?’. And maybe I should dispense with ‘they’ here – how does Pinker know that? My previous shallow research has told me that nobody knows when the first full-fledged language was spoken. Furthermore, we’re not sure about the first full-fledged human either. Was it mitochondrial Eve? But what about her mum? And her mum’s great-grandad? Which raises an old conundrum, one that very much exercised Darwin, and which creationists today love to make much of, the conundrum of speciation.
Recently, palaeontologists discovered human-like remains that might be 300,000 years old in a Moroccan cave. Or, that’s the story as I first heard it. Turns out they were discovered decades ago and dated at about 40,000 years, though some of their features didn’t match with that age. They’ve been reanalysed using thermoluminescence dating, a complicated technique involving measuring light emitted from escaping electrons (don’t ask). No doubt the dating findings will be disputed, as well as findings about just how human these early humans – about 100,000 years earlier than the usual Ethiopian suspects – really are. It’s another version of the lumpers/splitters debate, I suspect. It’s generally recognised that the Moroccan specimens have smaller brains than those from Ethiopia, but it’s not necessarily the case that they’re direct ancestors, proof that there was a rapid brain expansion in the intervening period.
Still there’s no doubt that the Moroccan finding, if it holds up, is significant, as at the very least it pushes back findings on the middle Stone Age, when the making of stone blades began, according to Ian Tattersall, the curator emeritus of human origins at the American Museum of Natural History. But as to tracing our ancestry back to ‘the first humans’, we just can’t do this at present, we can’t join the dots because we have far too few dots to join. It’s a question whether we’ll ever have enough. Evolution isn’t just gradual, it’s divergent, bushy. Where does Homo naledi, dated to around 250,000 years ago, fit into the picture? What about the Denisovans?
Meanwhile, new research and technologies continue to complicate the picture of humans and their ancestors. It’s been generally accepted that the last common ancestor of chimps and humans lived between 5 and 7 million years ago in Africa, but a multinational team of researchers has cast doubt on the assumption of African origin. The research focused on dental structures in two specimens of the fossil hominid Graecopithecus freybergi, found in Greece and Bulgaria. They found that the roots of their premolars were partially fused, making them similar to those of the human lineage, from Ardipithecus and Australopithecus to modern humans. These fossils date to around 7.2 million years ago. It’s conjectured that the possible placing of the divergence further north than has previously been hypothesised has much to do with environmental factors of the time. So, okay, African conditions were more northerly in those days…
So these new findings and new dating techniques are adding to the picture without clarifying it much, as yet. They’re like tiny pieces in a massive jigsaw puzzle, gradually accumulating, sometimes shifted to places of better fit, and so tantalisingly offering new perspectives on what the whole history might look like. I can imagine that in this field, as in so many others, researchers are chafing against their own mortality, as they yearn for a clearer, more comprehensive future view.
Meanwhile, speculations continue. Colin Barras offers his own in a recent New Scientist article, in which he considers the spread of H sapiens in relation to H naledi and H floresiensis. The 1800 or so H naledi fossil bones, discovered in a South African cave four years ago by a team of researchers led by Lee Berger, took a while to be reliably dated to around 250,000 years (give or take some 50,000), just a bit earlier than the most reliably dated H sapiens (though that may change). Getting at a precise age for fossils is often difficult and depends on many variables, in particular the surrounding rock or sediment, and many researchers were opting for a much earlier period on the evidence of the specimens themselves – their small brain size, their curved fingers and other formations. But if the most recent dating figure is correct (and there’s still some doubt) then, according to Barras, it just might be that H sapiens co-existed, in time and place, with these more primitive hominids, and outcompeted them. And more recent dating of H floresiensis, those isolated (so far as we currently know) hominids from the Indonesian island of Flores, has ruled out that they lived less than 50,000 years ago, so their extinction, again, may have coincided with the spread of all-conquering H sapiens. Their remote island location may explain their survival into relatively recent times, but their ancestry is very much in dispute. A recent, apparently comprehensive analysis may have solved the mystery however. It suggests H floresiensis descended from an undiscovered ancestor that left Africa over 2 million years ago. Those who stayed put evolved into H habilis, the first tool makers. Those who left may have reached the Flores region more than 700,000 years ago. The analysis is based on detailed comparisons with many other hominid species and earlier ancestors.
I doubt there will ever be agreement on the first humans, or a very precise date. We’re not so easily defined. But what about the first language? Is it confined to our species?
Much of the speculation on this question focuses on our Neanderthal cousins as the most likely candidates. Researchers have examined the Neanderthal throat structure as far as possible (soft tissue doesn’t fossilise, which is a problem), and have found one intriguing piece of evidence that makes Neanderthal speech plausible. The horseshoe-shaped hyoid bone is located high in the human throat, and is found in the same place in the Neanderthal throat. Given that this bone is differently placed in the throat of our common ancestors, this appears to be an example of convergent evolution. We don’t know the precise role of the hyoid in speech, but it certainly affects the space of the throat, and its flexible relationship to other bones and signs of its ‘intense and constant activity’ are suggestive of a role in language. Examination of the hyoids of other hominids suggests that a rudimentary form of language may go back at least 500,000 years, but this is far from confirmed. It’s probable that language underwent a more rapid development between 75,000 and 50,000 years ago. It’s also worth noting that a full-fledged language doesn’t depend on speech, as signing proves. It may be that a more or less sophisticated gestural system preceded spoken language.

a selection of primate hyoid bones
Of course there’s an awful lot more to say on the origin of language, even if much of it’s highly speculative. I plan to watch all the best videos and online lectures on the subject, and I’ll post about it again soon.
References
https://www.sciencedaily.com/releases/2017/05/170523083548.htm
https://www.vox.com/science-and-health/2017/6/7/15745714/nature-homo-sapien-remains-jebel-irhoud
how evolution was proved to be true
The origin of species is a natural phenomenon
Jean-Baptiste Lamarck
The origin of species is an object of inquiry
Charles Darwin
The origin of species is an object of experimental investigation
Hugo de Vries
(quoted in The Gene: an intimate history, by Siddhartha Mukherjee)

Gregor Mendel
I’ve recently read Siddhartha Mukherjee’s monumental book The Gene: an intimate history, a work of literature as well as science, and I don’t know quite where to start with its explorations and insights. But since, as a teacher of international students, some of whom come from Arabic countries, I’m occasionally faced with disbelief regarding the Darwin-Wallace theory of natural selection from random variation (usually in some such form as ‘you don’t really believe we come from monkeys do you?’), I think it might be interesting, and useful for me, to trace the connections, in time and ideas, between that theory and the discovery of genes that the theory essentially led to.
One of the problems for Darwin’s theory, as first set down, was how variations could be fixed in subsequent generations. And of course another problem was – how could a variation occur in the first place? How were traits inherited, whether they varied from the parent or not? As Mukherjee points out, heredity needed to be both regular and irregular for the theory to work.
There were few clues in Darwin’s day about inheritance and mutation. Apart from realising that it must have something to do with reproduction, Darwin himself could only half-heartedly suggest an unoriginal notion of blending inheritance, while also leaning at times towards Lamarckian inheritance of acquired characteristics – which he at other times scoffed at.
Mukherjee argues here that Darwin’s weakness was impracticality: he was no experimenter, though a keen observer. The trouble was that no amount of observation, in Darwin’s day, would uncover genes. Even Mendel was unable to do that, at least not in the modern DNA sense. But in any case Darwin lacked Mendel’s experimental genius. Still, he did his best to develop a hypothesis of inheritance, knowing it was crucial to his overall theory. He called it pangenesis. It involved the idea of ‘gemmules’ inhabiting every cell of an organism’s body and somehow shaping the varieties of organs, tissues, bones and the like, with specimens of these varied gemmules then being collected into the germ cells to produce ‘mixed’ offspring, with gemmules from each partner. Darwin describes it rather vaguely in his book The Variation of Animals and Plants under Domestication, published in 1868:
They [the gemmules] are collected from all parts of the system to constitute the sexual elements, and their development in the next generation forms the new being; but they are likewise capable of transmission in a dormant state to future generations and may then be developed.
Darwin himself admitted his hypothesis to be ‘rash and crude’, and it was effectively demolished by a very smart Scotsman, Fleeming Jenkin, who pointed out that a trait would be diluted away by successive unions with those who didn’t have it (Jenkin gave as an example the trait of whiteness, i.e. having ‘white gemmules’, but a better example would be that of blue eyes). With an intermingling of sexual unions, specific traits would be blended over time into a kind of uniform grey, like paint pigments (think of Blue Mink’s hit song ‘Melting Pot’).
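Jenkin’s arithmetic is easy to sketch. Under blending inheritance an offspring’s trait value is the average of its parents’, so a rare variant mating into an average population halves its distance from the mean every generation. A minimal Python illustration (the numbers are arbitrary, chosen only to show the geometric decay):

```python
def blended_trait(value, pop_mean=0.0, generations=8):
    """Jenkin's dilution argument: under blending inheritance each
    offspring is the average of its parents, so a rare trait's
    distance from the population mean halves every generation."""
    history = [value]
    for _ in range(generations):
        value = (value + pop_mean) / 2  # offspring = parental average
        history.append(value)
    return history

print(blended_trait(1.0))
# geometric decay towards the mean: 1.0, 0.5, 0.25, 0.125, ...
```

A particulate (Mendelian) unit, by contrast, is inherited whole or not at all – it can hide for generations, as a recessive, but it never shrinks. That is exactly the loophole Jenkin’s critique left open, though nobody in Darwin’s circle knew it.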
Darwin was aware of and much troubled by Jenkin’s critique, but he (and the scientific world) wasn’t aware that a paper published in 1866 had provided the solution – though he came tantalisingly close to that awareness. The paper, ‘Experiments in Plant Hybridisation’, by Gregor Mendel, reported carefully controlled experiments in the breeding of pea plants. First Mendel isolated ‘true-bred’ plants, noting seven true-bred traits, each of which had two variants (smooth or wrinkled seeds; yellow or green seeds; white or violet coloured flowers; flowers at the tip or at the branches; green or yellow pods; smooth or crumpled pods; tall or short plants). These variants of a particular trait are now known as alleles.
Next, he began a whole series of painstaking experiments in cross-breeding. He wanted to know what would happen if, say, a green-podded plant was crossed with a yellow-podded one, or if a short plant was crossed with a tall one. Would they blend into an intermediate colour or height, or would one dominate? He was well aware that this was a key question for ‘the history of the evolution of organic forms’, as he put it.
He experimented in this way for some eight years, with thousands of crosses and crosses of crosses, and the more the crosses multiplied, the more clearly he found patterns emerging. The first pattern was clear – there was no blending. With each crossing of true-bred variants, only one variant appeared in the offspring – only tall plants, only round peas and so on. Mendel named these dominant traits, and the non-appearing ones recessive. This was already a monumental result, blowing away the blending hypothesis, but as always, the discovery raised as many questions as answers. What had happened to the recessive traits, and why were some traits recessive and others dominant?
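Mendel’s ‘no blending’ result, and the famous 3:1 reappearance of the recessive in the next generation, can be checked with a few lines of Python – a hypothetical one-locus Punnett-square sketch, using ‘A’/‘a’ as my own labels for the dominant and recessive alleles of, say, plant height (not Mendel’s notation):

```python
from itertools import product
from collections import Counter

def cross(parent1, parent2):
    """All equally likely one-locus offspring genotypes from two
    parents, e.g. cross('Aa', 'Aa') - a four-cell Punnett square."""
    return Counter("".join(sorted(a + b)) for a, b in product(parent1, parent2))

def phenotype(genotype, dominant="A"):
    """One copy of the dominant allele is enough to show the trait."""
    return "tall" if dominant in genotype else "short"

# F1: true-bred tall (AA) x true-bred short (aa) -> all hybrids, all tall
f1 = cross("AA", "aa")
print(f1)  # Counter({'Aa': 4})

# F2: crossing the Aa hybrids recovers the recessive trait, in a 3:1 ratio
f2 = cross("Aa", "Aa")
print(Counter(phenotype(g) for g in f2.elements()))
# Counter({'tall': 3, 'short': 1})
```

The recessive allele was never diluted in the F1 plants – it was merely masked, and re-emerges intact when two hidden copies meet.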
Further experimentation revealed that disappeared traits could reappear in toto in further cross-breedings. Mendel had to carefully analyse the relations between different recessive and dominant traits as they were cross-bred in order to construct a mathematical model of the different ‘indivisible, independent particles of information’ and their interactions.
Although Mendel was alert to the importance of his work, he was spectacularly unsuccessful in alerting the biological community to this fact, due partly to his obscurity as a researcher, and partly to the underwhelming style of his landmark paper. Meanwhile others were aware of the centrality of inheritance to Darwin’s evolutionary theory. The German embryologist August Weismann added another nail to the coffin of the ‘gemmule’ hypothesis in 1883, a year after Darwin’s death, by showing that mice with surgically removed tails – thus having their ‘tail gemmules’ removed – never produced tail-less offspring. Weismann presented his own hypothesis, that hereditary information was always and only passed down vertically through the germ-line, that’s to say, through sperm and egg cells. But how could this be so? What was the nature of the information passed down, information that could contain stability and change at the same time?
The Dutch botanist Hugo de Vries, inspired by a meeting with Darwin himself not long before the latter’s death, was possessed by these questions and, though Mendel was completely unknown to him, he too looked for the answer through plant hybridisation, though less systematically and without the good fortune of hitting on true-breeding pea plants as his subjects. However, he gradually became aware of the particulate nature of hereditary information, with these particles (he called them ‘pangenes’, in deference to Darwin’s ‘pangenesis’), passing down information intact through the germ-line. Sperm and egg contributed equally, with no blending. He reported his findings in a paper entitled Hereditary monstrosities in 1897, and continued his work, hoping to develop a more detailed picture of the hereditary process. So imagine his surprise when in 1900 a colleague sent de Vries a paper he’d unearthed, written by ‘a certain Mendel’ from the 1860s, which displayed a clearer understanding of the hereditary process than anyone had so far managed. His response was to rush his own most recent work into press without mentioning Mendel. However, two other botanists, both as it happened working with pea hybrids, also stumbled on Mendel’s work at the same time. Thus, in a three-month period in 1900, three leading botanists wrote papers highly indebted to Mendel after more than three decades of profound silence.

Hugo de Vries
The next step of course, was to move beyond Mendel. De Vries, who soon corrected his unfair treatment of his predecessor, sought to answer the question ‘How do variants arise in the first place?’ He soon found the answer, and another solid proof of Darwin’s natural selection. The ‘random variation’ from which nature selected, according to the theory, could now be given a name of de Vries’ coinage: ‘mutation’. The Dutchman had collected many thousands of seeds from a wild primrose patch during his country rambles, which he planted in his garden. He identified some 800 new variants, many of them strikingly original. These random ‘spontaneous mutants’, he realised, could be combined with natural selection to create the engine of evolution, the variety of all living things. And key to this variety wasn’t the living organisms themselves but their units of inheritance, units which either benefitted or handicapped their offspring under particular conditions of nature.
The era of genetics had begun. The tough-minded English biologist William Bateson became transfixed on reading a later paper of de Vries, citing Mendel, and henceforth became ‘Mendel’s bulldog’. In 1905 he coined the word ‘genetics’ for the study of heredity and variation, and successfully promoted that study at his home base, Cambridge. And just as Darwin’s idea of random variation sparked a search for the source of that variation, the idea of genetics and those particles of information known as ‘genes’ led to a worldwide explosion of research and inquiry into the nature of genes and how they worked – chromosomes, haploid and diploid cells, DNA, RNA, gene expression, genomics, the whole damn thing. We now see natural selection operating everywhere we’re prepared to look, as well as the principles of ‘artificial’ or human selection, in almost all the food we eat, the pets we fondle, and the superbugs we try so desperately to contain or eradicate. But of course there’s so much more to learn….

William Bateson
three problems with Islamic society, moderate or otherwise
As a teacher of English to foreign students, I have a lot of dealings with Moslems, mostly male. I generally get on very well with them. Religion doesn’t come up as an issue, any more than it does with my Chinese or Vietnamese students. I’m teaching them English, after all. However, it’s my experience of the views of a fellow teacher, very much a moderate Moslem, that has caused me to write this piece, because those views seem to echo much that I’ve read about online and elsewhere.
1. Homosexuality
It’s well known that in such profoundly Islamic countries as Saudi Arabia and Afghanistan, there’s zero acceptance of homosexuality, to the point of claiming it doesn’t exist in those countries. Its ‘non-existence’ may be due to the fact that its practice incurs the death penalty (in Saudi Arabia, Yemen, Mauritania, Iran and Sudan), though such penalties are rarely carried out – except, apparently, in Iran. Of course, killing people in large numbers would indicate that there’s a homosexual ‘problem’. In other Moslem countries, homosexuals are merely imprisoned for varying periods. And lest we feel overly superior, take note of this comment from a very informative article in The Guardian:
Statistics are scarce [on arrests and prosecutions in Moslem countries] but the number of arrests is undoubtedly lower than it was during the British wave of homophobia in the 1950s. In England in 1952, there were 670 prosecutions for sodomy, 3,087 for attempted sodomy or indecent assault, and 1,686 for gross indecency.
This indicates how far we’ve travelled in a short time, and it also gives hope that other nations and regions might be swiftly transformed, but there’s frankly little sign of it as yet. Of course the real problem here is patriarchy, which is always and everywhere coupled with homophobia. It’s a patriarchy reinforced by religion, but I think that if we in the west were to put pressure on these countries and cultures, we’d succeed more through criticising their patriarchal attitudes than their religion.
Having said this, it just might be that acceptance of homosexuality among liberal Moslems outside of their own countries (and maybe even inside them) is greater than it seems to be from the vibes I’ve gotten from the quite large numbers of Moslems I’ve met over the years. A poll taken by the Pew Research Centre has surprised me with its finding that 45% of U.S. Moslems accept homosexuality (in 2014, up from 38% in 2007), more than is the case among some Christian denominations, and the movement towards acceptance aligns with a trend throughout the U.S. (and no doubt all other western nations), among religious and non-religious alike. With greater global communication and interaction, the diminution of poverty and the growth of education, things will hopefully improve in non-western countries as well.
2. Antisemitism and the Holocaust
I’ve been shocked to hear, more than once, Moslems blithely denying, or claiming as exaggerated, the events of the Holocaust. This appears to be a recent phenomenon, which obviously bolsters the arguments of many Middle Eastern nations against the Jewish presence in their region. However, it should be pointed out that Egypt’s President Nasser, a hero of the Moslem world, told a German newspaper in 1964 that ‘no person, not even the most simple one, takes seriously the lie of the six million Jews that were murdered [in the Holocaust]’. More recently Iran has become a particular hotspot of denialism, with former President Ahmadinejad making a number of fiery speeches on the issue. Most moderate Islamic organisations, here and elsewhere in the west, present a standard line that the Shoah was exactly as massive and horrific as we know it to be, but questions are often raised about the sincerity of such positions, given the rapid rise of denialism in the Arab world. Arguably, though, this denialism isn’t part of standard anti-semitism. Responding to his own research into holocaust denialism among Israeli Arabs (up from 28% in 2006 to 40% in 2008), Sammy Smooha of Haifa University wrote this:
In Arab eyes disbelief in the very happening of the Shoah is not hate of Jews (embedded in the denial of the Shoah in the West) but rather a form of protest. Arabs not believing in the event of Shoah intend to express strong objection to the portrayal of the Jews as the ultimate victim and to the underrating of the Palestinians as a victim. They deny Israel’s right to exist as a Jewish state that the Shoah gives legitimacy to. Arab disbelief in the Shoah is a component of the Israeli-Palestinian conflict, unlike the ideological and anti-Semitic denial of the Holocaust and the desire to escape guilt in the West.
This is an opinion, of course, and may be seen as hair-splitting with respect to anti-semitism, but it’s clear that these counterfactual views aren’t helpful as we try to foster multiculturalism in countries like Australia. They need to be challenged at every turn.

Amcha, the Coalition for Jewish Concerns, holds a rally in front of the Iranian Permanent Mission to the United Nations in response to Iranian President Mahmoud Ahmadinejad’s threats against Israel and denial of the Holocaust, Monday, March 13, 2006 in New York. (AP Photo/Mary Altaffer)
3. Evolution
While the rejection, and general ignorance, of the Darwin-Wallace theory of evolution – more specifically, natural selection from random variation – may not be the most disturbing feature of Islamic society, it’s the one that most nearly concerns me as a person keen to promote science and critical thinking. I don’t teach evolution of course, but I often touch on scientific topics in teaching academic English. A number of times I’ve had incredulous comments on our relationship to apes (it’s more than a relationship!), and as far as I can recall, they’ve all been from Moslem students. I’ve also come across various websites over the years, by Moslem writers – often academics – from Turkey, India and Pakistan whose anti-evolution and anti-Darwin views degenerate quickly into fanatical hate-filled screeds.
I won’t go into the evidence for natural selection here, or an explanation of the theory, which is essential to all of modern biology. It’s actually quite complex when laid out in detail, and it’s not particularly surprising that even many non-religious people have trouble understanding it. What bothers me is that so many Moslems I’ve encountered don’t make any real attempt to understand the theory, but reject it wholesale for reasons not particularly related to the science. They’ve used the word ‘we’ in rejecting it, so that it’s impossible to even get to first base with them. This raises the question of the teaching of evolution in Moslem schools (and of course, not just Moslem schools), and whether and how much this is monitored. One may argue that non-belief in evolution, like belief in a flat earth or other specious ways of thinking, isn’t so harmful given a general scientific illiteracy which hasn’t stopped those in the know from making great advances, but it’s a problem when being brought up in a particular culture stifles access to knowledge, and even promotes a vehement rejection of that knowledge. We need to get our young people on the right page not in terms of a national curriculum but an evidence-based curriculum for all. Evidence has no national boundaries.
Conclusion – the problem of identity politics
The term identity politics is used in various ways, but I feel quite clear about my own usage here. It’s when your identity is so wrapped up in a political or cultural or religious or class or caste or professional grouping that it trumps your own independent critical thinking and analysis. The use of ‘we think’ or ‘we believe’ is the red flag for these attitudes, but of course this usage isn’t always overt or conscious. The best and probably only way to deal with this kind of thinking is through constructive engagement: drawing people out of the groupthink intellectual ghetto through argument, evidence and invitations to reconsider (or consider for the first time); and if that doesn’t work, firmness regarding the evidence-based view, together with keeping future lines of communication open. They say you should keep your friends close and your enemies closer, and it’s a piece of wisdom that works on a pragmatic and a humane level. And watch out for that firmness, because the evidence is rarely fixed. Education too is important. As an educator, I find that many students are open to the knowledge I have to offer, and are sometimes animated and inspired by it, regardless of their background. The world’s an amazing place, and students can be captivated by its amazingness, if it’s presented with enthusiasm. That can lead to explorations that can change minds. Schools are, or can be, places where identity politics can fragment, as peers from different backgrounds converge and clash, sometimes constructively. We need to watch for and combat the echo-chamber effect of social media, a new development that often reinforces false and counter-productive ideas – and encourages mean-spirited attacks on faceless adversaries. Breaking down walls and boundaries, rather than constructing them, is the best solution.
Real interactions rather than virtual ones, and thinking about the background and humanity of the other before leaping into the fray (I’m beginning to sound saintlier than I’ve ever really been – must be the Ha Ji-won influence!)
touching on the complex causes of male violence
A bout of illness and a general sense of despair about blogging has prevented me from posting here for a while. For my health and well-being I’ll try to get back on track. So here’s a brief post on my hobbyhorse of the moment.
It surprises me that people could try to argue with me about the violence of men compared to women, trying to explain it away in terms of physical size – I mean, really? And then, when this doesn’t fly, they point to individuals of established combativeness, the Iron Lady, Golda Meir, and why not mention Boadicea, or [place name of fave female serial killer here]?
And it really demoralises me when this argumentative cuss is a woman. I mean I love a feisty female but really…
It reminds me of a scenario from my not-so-distant youth, when I briefly hung out with a perverse young lass who insisted with unassailable feistiness that men were clearly more intelligent than women (by and large, presumably). It certainly made me wonder at how intelligence could be turned against itself. But was it intelligence, or something else?
But let’s get back to reality. Men are more violent than women in every country and every culture on the planet. This is a statistical fact, not a categorical, individual claim. Of course there are violent women and much less violent men. That isn’t the point. The point is that you cannot sheet this home to sexual dimorphism. Two examples will suffice. First, look at death and injury by road accident in the west – in countries where both men and women are permitted to drive. The number of males killed in road accidents is considerably higher than females in every western country. In Australia males are almost two and a half times more likely to die this way than females, and in some countries it’s more, but it’s everywhere at least double. The WHO has a fact sheet on this, updated in November 2016:
From a young age, males are more likely to be involved in road traffic crashes than females. About three-quarters (73%) of all road traffic deaths occur among men. Among young drivers, young males under the age of 25 years are almost 3 times as likely to be killed in a car crash as young females.
The second example is youth gangs, including bikie gangs. These are, obviously, predominantly male, their purpose is usually to ‘display manhood’ in some more or less brutal way, and, again obviously, they can’t be explained away in terms of size difference. Other causes need to be considered and studied, and of course, they have been. Some of these causes are outlined in Konner’s book, but I can’t detail them here because I’ve lent the book out (grrr). An interesting starting point for thinking about the social causes of male violence is found in a short essay by Jesse Prinz here. Prinz largely agrees with Konner on the role of agricultural society in sharpening the male-female division in favour of males, but I think he oversimplifies the differences in his tendency to rely on social explanations, and he says nothing about gene expression and hormonal factors, which Konner goes into in great detail. It seems to me that Prinz’s line of reasoning would not be able to account for the reckless, life-threatening behaviour of young male drivers, for example. While there is clearly something social going on there, I would contend that something biological is also going on. Or something in the biological-social nexus, if you will. Clearly, it’s a very complex matter, and if we can uncover hormonal or neurotransmissional causes, that doesn’t rule out social factors playing a regulatory role in those causes. Social evolution, we’re finding, can change biology much more quickly than previously thought.