an autodidact meets a dilettante…

‘Rise above yourself and grasp the world’ Archimedes – attribution

Archive for the ‘science’ Category

how to define a planet: the problematic case of Pluto

leave a comment »

Pluto, with its ‘heart-shaped’ area known as Sputnik Planitia, imaged by New Horizons, July 14 2015

A while back I listened to a podcast from Point of Inquiry, in which two planetary scientists, Alan Stern and David Grinspoon, involved in NASA’s New Horizons mission to Pluto, were separately interviewed, and were inevitably asked about Pluto’s demotion from planet status. Having not followed this issue, I was surprised at the response. So it’s time to take a closer look.

Of course I should be writing ecstatically about the New Horizons mission, not to mention those of Juno, Cassini, Mars’ Curiosity and so forth, and hopefully that will come, but the controversy about Pluto immediately struck me, as I thought, in my naïveté, that its demotion was a consensual thing amongst astronomers, with only the ignoroscenti (my neologism) left to mourn the fact (not that I mourned it particularly – Pluto still existed after all, and it didn’t care a jot what we thought of it).

Pluto, discovered by Clyde Tombaugh in 1930, was accepted as the ninth and final planet in our solar system for decades until the nineties, when another Kuiper belt object was discovered (besides Charon, Pluto’s large moon), and the Kuiper belt itself became a thing, in fact a massive thing, far bigger than the ‘familiar’ asteroid belt between Mars and Jupiter. We now know of more than 1,000 Kuiper belt objects, with at least 100,000 believed to exist. The Kuiper belt is widely spread out from the orbit of Neptune, and though Pluto is its largest and brightest object, it’s not the most massive. Presumably it’s for this reason that Pluto was demoted – what with the scattered disc and the Oort cloud there suddenly seemed to be a host of objects that could be included as planets, so it was thought better to exclude Pluto, or to demote it to dwarf planet status, presumably along with other assorted Kuiper belt objects (KBOs), rocks and iceballs that were worthy of the designation. That seemed okay to my thoughtless mind, but here’s what Alan Stern had to say on the subject:

Well, you know, we don’t really honour that classification in planetary science, that was really done by a group of different astronomers who don’t know much about planets. Let me give you a technical term, we call it BS. You know what BS stands for don’t you? Bad Science. Now you wouldn’t ask a podiatrist, a foot doctor, to help you if you had a cardiovascular problem with your heart, that’d be the wrong expertise, though they’re both doctors you’d be going for a cardiologist. And if you had a real estate problem you probably wouldn’t go to a divorce attorney, even though they’re both attorneys. In the space field we have many professions, we have engineering professions, we have many different scientific specialties, etc. Astronomers really don’t know much about planets any more than I’m an expert in black holes in faraway galaxies. They had a little meeting in 2006, they were worried that school children would have to memorise the names of too many planets, so they wrote a definition that limited the number of planets to eight. Now, right after that, Ira Flatow called me up on Science Friday and said, would you debate Mike Brown, who was one of the proponents of ‘let’s limit the planets to eight’, and I said, sure, and we got on the phone and it’s Science Friday live, and Mike Brown makes his case and says, ‘look we just can’t have 50 planets, it’s too many to remember.’ Now, I found that anti-scientific, it seems like engineering the definition, versus letting it inform you, but Ira said, Alan what’d you think, ‘can’t have 50 planets’, what d’you say back to Mike? I said, ‘well if you can’t have 50 planets then we’re probably going to have to go back to eight states, I guess’. And he was speechless…

I love that story – though no doubt Mike Brown would’ve told a different one. So let’s turn Stern’s objection into an inquiry. Was it scientifically correct/accurate/fair to reclassify Pluto as a dwarf/minor planet?

Happily I just happened to listen to a podcast of the Skeptics’ Guide a few days later, which has led me to a more detailed piece on Steven Novella’s Neurologica blog on the Pluto controversy. Apparently, in the above-mentioned 2006 meeting they decided that to be classified as a planet, a body in our solar system should meet 3 criteria:

  • it has to orbit the sun
  • it has to be spheroid (i.e. have the mass to be so, due to its gravity)
  • it must have cleared its orbit of other objects

Now this third criterion immediately seems the dodgiest, as it sounds like it’s designed to eliminate any KBOs. And how do we know an orbit is cleared? After all, one day a comet or asteroid may strike us, because our orbits have coincided this time around. And why is that third criterion even important?

Novella cites a recent paper by planetary scientist Philip Metzger, who argues that the third criterion is invalid and that nothing about a body’s orbit should be in the definition, since orbits can alter due to external influences. Only characteristics intrinsic to the body should be included in the definition. This would essentially leave one criterion standing – that of sphericity. And even then, how sphere-like does a planet have to be? Another ‘problem’ with Metzger’s definition is that it would include moons, such as our own, and many others. Novella has his own classifying suggestion, which sounds promising to me:

We keep criteria “a” and “b” and drop “c”. However, we add that the object must not be in a subservient orbit around a larger object. What does that mean? If two objects, like the Earth and Moon, are in orbit around each other, and the center of gravity (barycenter) lies beneath the surface of one of the bodies, then the smaller object will be said to orbit the larger object, and is a moon. Therefore Europa, which is large enough by itself to be a planet, would instead be considered a moon because it orbits Jupiter.

I need to further explain the term ‘barycentre’, for my own sake. Think of two bodies in gravitational relationship to each other. Inevitably, one of them will be more massive, and will exert a greater gravitational force. An obvious case is the Earth and the Moon. Between the two there is a point, the ‘centre of gravity’, or barycentre, around which the two bodies revolve, but because the Earth is a lot more massive than the Moon and they’re relatively close to each other, that barycentre is actually close enough to the Earth’s centre to be within the mass of the Earth, with the result that only the Moon appears to revolve. The Earth, though, is very much affected by the Moon’s gravitational field, which causes a slight wobble as well as tidal effects on the Earth’s surface.
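The arithmetic here is simple enough to check for myself. A minimal sketch in Python, using rough published masses, separations and radii (my figures, not anything from the interviews or Novella’s post):

```python
# Barycentre sketch: for two bodies a distance d apart, the centre of mass
# sits r = d * m2 / (m1 + m2) from the centre of the more massive body m1.

def barycentre_offset(m1, m2, d):
    """Distance of the two-body barycentre from the centre of body 1 (same units as d)."""
    return d * m2 / (m1 + m2)

# Earth-Moon: masses in kg, separation in km. The barycentre lies ~4700 km
# from Earth's centre, inside Earth's ~6371 km radius -- so the Moon is a moon.
print(barycentre_offset(5.972e24, 7.342e22, 384_400))   # ≈ 4670 km

# Pluto-Charon: the barycentre lies ~2100 km out, well beyond Pluto's
# ~1188 km radius -- so the pair would count as a binary-planet system.
print(barycentre_offset(1.303e22, 1.586e21, 19_570))    # ≈ 2120 km
```

The same one-liner, fed exoplanet numbers, would sort moons from binary planets anywhere.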

Interestingly, Novella’s reclassification would include Charon, Pluto’s ‘moon’, as a planet (as well as Pluto of course) because its size relative to Pluto puts the barycentre at a point between the two bodies, rather than within Pluto. So Pluto-Charon would be reclassified as a binary-planet system. It would also promote Ceres, in the asteroid belt, and Eris and Makemake, two recently discovered Kuiper belt objects, to planetary status. That takes the current eight up to thirteen, with others yet to be discovered. 

It’s unlikely of course that the astronomical overlords who reclassified Pluto would be swayed by any mere outsider’s view, however well-reasoned, but this examination of the issue is a reminder of just how dubious the reasoning of ‘experts’ can be, and how important it is to question that reasoning. Size apparently does matter to these guys, but this new category of ‘dwarf’ or ‘minor’ planet seems inherently unstable, and will probably become even more so as the number of discovered exoplanets increases. Will it be mass or volume that’s the decider, and what will be the mass or volume that decides? And does it really matter? It’s only nomenclature after all. And yet… The difference between an asteroid and a comet is important, is it not? And so is the difference between a planet and an asteroid. And so is the difference between a moon and a planet. And so… is it not? 

Written by stewart henderson

October 14, 2018 at 1:09 pm

more about ozone, and the earth’s greatest extinction event

leave a comment »

the Siberian Traps are layers of flood basalt covering an area of 2 million square kilometres

Ozone, or trioxygen (O3), an unstable molecule which is regularly produced and destroyed by the action of sunlight on O2, is a vital feature in our atmosphere. It protects life on earth from the harmful effects of too much UV radiation, which can contribute to skin cancers in humans, and genetic abnormalities in plant life. In a previous post I wrote about the discovery of the ozone shield, and the hole above Antarctica, which we seem to be reducing – a credit to human global co-operation. In this post I’m going to try and get my head around whether or not ozone depletion played a role in the so-called end-Permian extinction of some 250 mya. 

I first read of this theory in David Beerling’s 2009 book The emerald planet, but recent research appears to have backed up Beerling’s scientific speculations – though speculation is too weak a word. Beerling is a world-renowned geobiologist and expert on historical global climate change. He’s also a historian of science, and in ‘An ancient ozone catastrophe?’, chapter 4 of The emerald planet, he describes the discovery and understanding of ozone through the research of Robert Strutt, Christian Schönbein, Marie Alfred Cornu, Walter Hartley, George Dobson, Sidney Chapman and Paul Crutzen, among others. He goes on to describe the ozone hole discovery in the 70s and 80s, before focusing on research into the possible effects of previous events – the Tunguska asteroid strike of 1908, the Mount Pinatubo eruption of 1991 and others – on atmospheric ozone levels, and then homes in on the greatest extinction event in the history of our planet – the end-Permian mass extinction, ‘the Great Dying’, which wiped out some 95% of all species then existing.

According to Beerling, it was an international team of palaeontologists led by Henk Visscher at the University of Utrecht who first made the claim that stratospheric ozone had substantially reduced in the end-Permian. They hypothesised that, due to the greatest volcanic eruptions in Earth history, which created the Siberian Traps (layers of solidified basalt covering a huge area of northern Russia), huge deposits of coal and salt, the largest on Earth, were disrupted:

The widespread heating of these sediments and the action of hot groundwater dissolving the ancient salts, was a subterranean pressure cooker synthesising a class of halogenated compounds called organohalogens, reactive chemicals that can participate in ozone destruction. And in less than half a million years, this chemical reactor is envisaged to have synthesised and churned out sufficiently large amounts of organohalogens to damage the ozone layer worldwide to create an intense increased flux of UV radiation.

However, Beerling questions this hypothesis and considers that it may have been the eruptions themselves, which lasted 2 million years and occurred at the Permian-Triassic boundary 250-252 mya, rather than their impact on salt deposits, that did the damage. There’s evidence that many of the eruptions originated from as deep as 10 kilometres below the surface, injected explosively enough to reach the stratosphere, and that these plumes contained substantial amounts of chlorine. 

More recent research, published this year, has further substantiated Visscher’s team’s finding regarding genetic mutations in ancient conifers and lycopsids, and their probable connection with UV radiation enabled by ozone destruction. The mutations were global and dated to the same period. Laboratory experiments exposing related modern plants to bursts of UV radiation have produced more or less identical spore mutations.

The exact chain of events linking the eruptions to the ozone destruction has yet to be worked out, and naturally there’s a lot of scientific argy-bargy going on, but the whole story, even considering that it occurred so far in the past, is a reminder of the fragility of that part of our planet that most concerns us – the biosphere. The eruptions clearly altered atmospheric chemistry and temperature. Isotopic measurements of oxygen in sea water suggest that equatorial waters reached more than 40°C. As can be imagined, this had killer effects on multiple species.

So, we’re continuing to gain knowledge on the ozone shield and its importance, and fragility. I don’t know that there are too many ozone hole skeptics around (I don’t want to look too hard), but if we could only get the same kind of apparent near-unanimity with regard to anthropogenic global warming, that would be great progress. 

Written by stewart henderson

October 10, 2018 at 3:15 pm

about ozone, its production and depletion

with one comment

an Arctic polar stratospheric cloud, photographed in Sweden (filched from a website of NOAA’s Earth System Research Laboratory)

People will remember the ‘hole in the ozone’ issue that came up in the eighties I think, and investigators found that it was all down to CFCs, which were quite quickly banned, and then everything was hunky dory….

Or that’s how I vaguely recall it. Time to take a much closer look. 

I take my cue from ‘An ancient ozone catastrophe?’, chapter 4 of David Beerling’s The emerald planet, in which he looks at the evidence for a previous ozone disaster and its possible relation to the great Permian extinction of 252 million years ago. I’ll probe into that matter in another post. In this post I’ll try to answer some more basic questions – what is ozone, where is the ozone layer and why does it have a hole in it?

Ozone is also known as trioxygen, which gives a handy clue to its structure. Oxygen can exist in different allotropes or molecular structures which are more or less stable. O3, ozone, is much less stable than O2 and has a very pungent chlorine-like odour and a pale blue colour. It’s present in minute quantities throughout the atmosphere but is most concentrated in the lower part of the stratosphere, 20 to 30 kilometres above the Earth’s surface. This region is called the ozone layer, or ozone shield, though it’s still not particularly dense with ozone, and that density varies geographically and seasonally. Ozone’s instability means that it doesn’t last long, and has to be replenished continually.

In 1928 chlorofluorocarbons (CFCs) were developed as a seemingly safe form of refrigerant, which, under patent as Freon, came to be used in air-conditioners, fridges, hair-sprays and a variety of other products. As it turned out, these CFCs aren’t so harmless when they reach the upper atmosphere, where ultraviolet radiation breaks chlorine atoms free from the CFC molecules. The free chlorine reacts with ozone to form chlorine monoxide (ClO) and regular O2, and the unstable ClO in turn reacts with free oxygen atoms, releasing the chlorine to attack more ozone in a continuing catalytic cycle.
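For my own notes, the cycle can be written out and sanity-checked. This is my summary of the standard stratospheric chemistry, not something quoted from Beerling; the code is just bookkeeping that confirms each step conserves atoms and returns the chlorine unchanged:

```python
import re
from collections import Counter

# The catalytic chlorine cycle (standard chemistry, summarised by me):
#   Cl  + O3 -> ClO + O2      (ozone destroyed)
#   ClO + O  -> Cl  + O2      (chlorine regenerated)
# Net: O3 + O -> 2 O2, with the same Cl atom cycling many thousands of times.

def atoms(*species):
    """Tally atoms in simple formulas like 'ClO' or 'O3'."""
    total = Counter()
    for s in species:
        for sym, n in re.findall(r'(Cl|O)(\d*)', s):
            total[sym] += int(n) if n else 1
    return total

# both steps balance, and Cl goes in and comes out -- a true catalyst
assert atoms('Cl', 'O3') == atoms('ClO', 'O2')
assert atoms('ClO', 'O') == atoms('Cl', 'O2')
print('cycle balances')
```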

By the eighties, it had become clear that something was going wrong with the ozone layer. Studies revealed that a gigantic hole in the layer had opened up over Antarctica, and without going into detail, CFCs were found to be largely responsible. There was the usual fight with vested business interests, but in 1987 the Montreal protocol against the use of ozone-depleting substances (ODS) was drawn up, a landmark agreement which has been successful in starting off the long and far from completed process of repair of the ozone shield.

As a very effective oxidant, ozone has many commercial applications, but the same oxidising property makes it a danger to plant and animal tissue. Much better for us to keep most of it up above the troposphere, where its ability to absorb UV radiation has made it virtually essential for maintaining healthy life on Earth’s surface. 

So here are some questions. Why does ozone proliferate particularly at the top of the troposphere, in the lower stratosphere? If it’s so reactive, how does it maintain itself at a particular rate? Has the thinning or reduction of that layer seriously influenced life on Earth in the past? From my reading, mainly of Beerling, I think I can answer the first two questions. The third question, which Beerling explores in the above-mentioned chapter of his book, is more speculative, and more interesting. 

Sidney Chapman, a brilliant geophysicist and mathematician of the early twentieth century, essentially answered the first question. He realised that ozone was both formed and destroyed by the action of sunlight, specifically UV radiation, on atmospheric oxygen. He calculated that this action would reduce and finally stop at a point approximately 15 km above sea level, because the reactions which had produced the ozone higher up had absorbed the UV radiation in the process. No activation energy to produce any more ozone. That explained the lower limit of ozone. The upper limit was explained by the lack of oxygen in the upper stratosphere to produce a stable layer – for production to exceed destruction. This was interesting confirmation of observations made earlier by the meteorologist and balloonist Léon-Philippe Teisserenc de Bort, who noted that, contrary to his expectations, the air temperature didn’t fall gradually with altitude but reached a point of stabilisation where the air even seemed to become warmer. He named this upper layer of air the stratosphere, and the cooler more turbulent layer below he called the troposphere. It’s now known that this upper-air warming is caused by the absorption of UV radiation by ozone.
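Chapman’s scheme boils down to four reactions, which I’ve set out below as standardly presented (my summary rather than Beerling’s wording); the little function just checks that oxygen is only ever shuffled between forms, which is why no O2 and no hard UV means no ozone:

```python
# Chapman's ozone scheme:
#   1. O2 + UV -> O + O     photolysis -- needs the hard UV absorbed higher up,
#                           which is why production peters out ~15 km down
#   2. O + O2  -> O3        ozone production
#   3. O3 + UV -> O2 + O    ozone absorbing the UV that warms the stratosphere
#   4. O + O3  -> 2 O2      ozone destruction

def o_count(*species):
    """Total oxygen atoms across species written as 'O', 'O2', 'O3'."""
    return sum(int(s[1:]) if len(s) > 1 else 1 for s in species)

# every step conserves oxygen atoms
assert o_count('O2') == o_count('O', 'O')        # reaction 1
assert o_count('O', 'O2') == o_count('O3')       # reaction 2
assert o_count('O3') == o_count('O2', 'O')       # reaction 3
assert o_count('O', 'O3') == o_count('O2', 'O2') # reaction 4
print('Chapman scheme balances')
```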

Our picture of ozone still had some holes in it, however, as it seemed there was a lot less of it around than the calculations of Chapman suggested. To quote from Beerling’s book: 

… there had to be some as-yet unappreciated means by which ozone was being destroyed. The fundamental leap required to solve the problem was taken comparatively recently, in 1970, by a then young scientist called Paul Crutzen. Crutzen showed that, remarkably, the oxides of nitrogen, produced by soil microbes, catalysed the destruction of ozone many kilometres up in the stratosphere. Few people appreciate the marvellous fact that the cycling of nitrogen by the biosphere exerts an influence on the global ozone layer: life on Earth reaches out to the chemistry of the stratosphere. 

Now to explain why the hole in the ozone shield occurred above the Antarctic. My understanding and explanation starts with reading Beerling and ends with this post from the USA’s National Oceanic and Atmospheric Administration’s Earth System Research Laboratory (NOAA/ESRL). 

The ozone hole over Antarctica varies in size, and is largest in the months of winter and early spring. During these months, due to the large and mountainous land mass there, average minimum temperatures can reach as low as −90°C, which is on average 10°C lower than Arctic winter minimums (Arctic temperatures are generally more variable than in the Antarctic). When winter minimums fall below around −78°C at the poles, polar stratospheric clouds are formed, and this happens far more often in the Antarctic – for about five months in the year. Chemical reactions between halogen gases and these clouds produce the highly reactive gases chlorine monoxide (ClO) and bromine monoxide (BrO), which are destructive to ozone. 

this graphic shows that the Antarctic stratosphere is consistently colder, and less variable in temperature, than the Arctic. Polar stratospheric clouds (PSCs) form at −78°C

Most ozone is produced in the tropical stratosphere, in reactions driven by sunlight, but a slow movement of stratospheric air, known as the Brewer-Dobson circulation, transports it over time to the poles, so that ozone ends up being more sparse in the tropics. Interestingly, although most ozone-depleting substances – mainly halogen gases – are produced in the more heavily populated northern hemisphere, complex tropospheric convection patterns distribute the gases more or less evenly throughout the lower atmosphere. Once in the stratosphere and distributed to the poles, the air carrying the halogen-gas products becomes isolated due to strong circumpolar winds, which are at their height during winter and early spring. This isolation preserves ozone depletion reactions for many weeks or months. The polar vortex at the Antarctic, being stronger than in the Arctic, is more effective in reducing the flow of ozone from tropical regions.

So – I’ve looked here briefly at what ozone is, where it is, and how it’s produced and destroyed, but I haven’t really touched on its importance for protecting life here on Earth. So that, and whether its depletion may have had catastrophic consequences 250 million years ago, will be the focus of my next post. 


The Emerald Planet, by David Beerling, Oxford Landmark Science, 2009

Written by stewart henderson

October 3, 2018 at 9:24 pm

a little about the chemistry of water and its presence on Earth

leave a comment »

So I now know, following my previous post, a little more than I did about how water’s formed from molecular hydrogen and oxygen – you have to break the molecular bonds and create new ones for H2O, and that requires activation energy, I think. But I need to explore all of this further, and I want to do so in the context of a fascinating question, which I’m hoping is related – why is there so much water on Earth’s surface?

When Earth was first formed, from planetesimals energetically colliding together, generating lots of heat (which may have helped with the creation of H2O, but not in liquid form??) there just doesn’t seem to have been a place for water, which would’ve evaporated into space, wouldn’t it? Presumably the still-forming, virtually molten Earth had no atmosphere. 

The most common theory put out for Earth’s water is bombardment in the early days by meteorites of a certain type, carbonaceous chondrites. These meteorites were formed further out from the sun, where water would have frozen. Carbonaceous chondrites are known to contain the same ratio of heavy water to ‘normal’ water as we find on Earth. Heavy water is formed with deuterium, an isotope of hydrogen containing a neutron as well as the usual proton. Obviously there had to have been plenty of these collisions over a long period to create our oceans. Comets have been largely ruled out because, of the comets we’ve examined, the deuterium/hydrogen ratio is about double that of the chondrites, though some have argued that those comets may be atypical. Also there’s some evidence that the D/H ratio of terrestrial water has changed over time.

So there are still plenty of unknowns about the history of Earth’s water. Some argue that volcanism, along with other internal sources, was wholly or partly responsible – water vapour is one of the gases produced in eruptions, which then condensed and fell as rain. Investigation of moon rocks has revealed a D/H ratio similar to that of chondrites, and also that of Earth (yes, there’s H2O on the moon, in various forms). This suggests that, since it has become clear that the Moon and Earth are of a piece, water has been there on both from the earliest times. Water ice detected in the asteroid belt and elsewhere in the solar system provides further evidence of the abundance of this hardy little molecule, which enriches the hypotheses of researchers. 

But I’m still mystified by how water is formed from molecular, or diatomic, hydrogen and oxygen. It occurs to me, thanks to Salman Khan, that having a look at the structural formulae of these molecules, as well as investigating ‘activation energy’, might help. I’ve filched the ‘Lewis structure’ of water from Wikipedia.

It shows that hydrogen atoms are joined to oxygen by a single bond, the sharing of a pair of electrons. They’re called polar covalent bonds, as described in my last post on the topic. H2 also binds the two hydrogen atoms with a single covalent bond, while O2 is bound in a double covalent bond. (If you’re looking for a really comprehensive breakdown of the electrochemical structure of water, I recommend this site).

So, to produce water, you need enough activation energy to break the bonds of H2 and O2 and create the bonds that form H2O. Interestingly, I’m currently reading The Emerald Planet, which gives an example of the kind of activation energy required. The Tunguska event, an asteroid visitation in the Siberian tundra in 1908, was energetic enough to rip apart the bonds of molecular nitrogen and oxygen in the surrounding atmosphere, leaving atomic nitrogen and oxygen to bond into nitric oxide. But let’s have a closer look at activation energy. 

So, according to Wikipedia:

In chemistry and physics, activation energy is the energy which must be available to a chemical or nuclear system with potential reactants to result in: a chemical reaction, nuclear reaction, or various other physical phenomena.

This stuff gets complicated and mathematical very quickly, but activation energy (Ea) is measured in either joules (or kilojoules) per mole or kilocalories per mole. A mole, as I’ve learned from Khan, is the number of atoms there are in 12g of carbon-12. So what? Well, that’s just a way of translating atomic mass units (amu) to grams (one gram equals one mole of amu). 
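Since the mole keeps tripping me up, here’s a quick numerical sanity check of that bookkeeping. The Avogadro constant is the standard value; the 100 kJ/mol activation energy is just an arbitrary example I’ve chosen, not a real measurement:

```python
# Checking '1 gram = 1 mole of amu' numerically.
N_A = 6.02214076e23              # Avogadro constant, particles per mole
amu_in_grams = 1 / N_A           # one atomic mass unit, expressed in grams

# one mole of carbon-12 atoms, each weighing 12 amu:
molar_mass_c12 = 12 * amu_in_grams * N_A
print(round(molar_mass_c12, 6))  # 12.0 grams, as the definition promises

# converting an activation energy between its two common units
# (100 kJ/mol is a made-up example value):
Ea = 100.0                       # kJ/mol
print(round(Ea / 4.184, 1))      # 23.9 kcal/mol, since 1 kcal = 4.184 kJ
```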

The point is though that we can measure the activation energy, which, in the case of molecular reactions, is going to be more than the measurable change between the initial and final conditions. Activation energy destabilises the molecules, bringing about a transition state in which usually stable bonds break down, freeing the molecules to create new bonds – something that is happening throughout our bodies at every moment. When molecular oxygen is combined with molecular hydrogen in a confined space, all that’s required is the heat from a lit match to start things off. This initial absorption of energy is the endothermic step of the process. Molecules near the fire break down into atoms, which recombine into water molecules, a reaction which releases a lot of energy, creating a chain of reactions until all the molecules are similarly recombined. From this you can imagine how water could have been created in abundance during the fiery early period of our solar system’s evolution.
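To put rough numbers on the match-in-a-confined-space picture: using average bond enthalpies from standard tables (ballpark figures I’m supplying myself, not anything from my sources), the reaction, once started, more than pays for its own activation energy:

```python
# 2 H2 + O2 -> 2 H2O, in bond-energy terms (kJ/mol, textbook averages)
H_H, O_O, O_H = 436, 498, 463

energy_to_break = 2 * H_H + O_O   # snap two H-H bonds and one O=O bond
energy_released = 4 * O_H         # form four O-H bonds (two per water molecule)
net = energy_to_break - energy_released

print(energy_to_break, energy_released, net)  # 1370 1852 -482
# net is negative: roughly 482 kJ released per two moles of water -- the
# surplus that sustains the chain reaction after the match gets it going
```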

I’ll end with more on the structure of water, for my education. 

As a liquid, water has a structure in which the H-O-H angle is about 106°. It’s a polarised molecule, with the negative charge on the oxygen being around 70% of an electron’s negative charge, which is neutralised by a corresponding positive charge shared by the two hydrogen atoms. These values can change according to energy levels and environment. As opposite charges attract, different water molecules attract each other when their H atoms are oriented to other O atoms. The British Chemistry professor Martin Chaplin puts it better than I could:

This attraction is particularly strong when the O-H bond from one water molecule points directly at a nearby oxygen atom in another water molecule, that is, when the three atoms O-H O are in a straight line. This is called ‘hydrogen bonding’ as the hydrogen atoms appear to hold on to both O atoms. This attraction between neighboring water molecules, together with the high-density of molecules due to their small size, produces a great cohesive effect within liquid water that is responsible for water’s liquid nature at ambient temperatures.

We’re all very grateful for that nature. 

Written by stewart henderson

September 24, 2018 at 10:32 am

Posted in chemistry, science, water


exploring oxygen

leave a comment »

I’d much prefer choccy cigars


I’ve been reading David Beerling’s fascinating but demanding book The Emerald Planet, essentially a history of plants, and their contribution to our current life-sustaining atmosphere, and it has inspired me to get a handle on atmospheric oxygen in general and the properties of this rather important diatomic molecule. Demanding because, as always, basic science doesn’t come naturally to me so I have to explain it to myself in great detail to really pin it down, and then I forget. For example, I don’t have any understanding of oxidation right now, though I’ve read about it, and probably written about it, and more or less understood it, many times. Things fall apart, and then we fall apart…

Okay, let me pull myself together. Oxygen is a highly reactive gas, combining with other elements readily in a number of ways. A bushfire is an example of oxidation, in which free oxygen is ‘consumed’ rapidly, reacting with carbon in the dry wood to produce carbon dioxide, among other gases. This is also called combustion. Rust is a slower form of oxidation, in which iron reacts with oxygen to form iron oxide. So I think that’s basically what oxidation is, the trapping of ‘free’ oxygen into other gases or compounds, think carbon monoxide, sulphur dioxide, hydrogen peroxide, etc etc. Not to mention its reaction with hydrogen to form water, that stuff that makes up more than half our bodily mass. 

Well, I’m wrong. Oxidation doesn’t have to involve oxygen at all. Which I think is criminally confusing. Yes, fire and rust are examples of oxidation reactions, but so is a reaction between hydrogen and fluorine gas to produce hydrofluoric acid (it’s actually a redox reaction – hydrogen is being oxidised and fluorine is being reduced). According to this presumably reliable definition, ‘oxidation is the loss of electrons during a reaction by a molecule, atom or ion’. Reduction is the opposite. The reason it’s called oxidation is historical – oxygen, the gas that Priestley and Lavoisier famously argued over, was the first gas known to engage in this sort of behaviour. Basically, oxygen oxidises other elements, getting them to hand over their electrons – it’s an electron thief.

Oxygen has six valence electrons, so needs another two to feel ‘complete’. It’s diatomic in nature, existing around us as O2. I’m not sure how that works – if each individual atom wants two electrons, to make eight electrons in its outer shell for stability, why would it join with another oxygen to complete this outer shell, and then some? That makes for another four electrons. Are they now valence electrons? Apparently not, in this stable diatomic form. Here’s an expert’s attempt to explain this, from Quora

For oxygen to have a full outer shell it must have 8 electrons in it. But it only has 6 electrons in its valence shell. Each oxygen atom is actively seeking to get more electrons to complete its valence shell. If no other atoms except oxygen atoms are available, each oxygen atom will try to wrestle extra valence electrons from another oxygen atom. So if one oxygen atom merges with another, they “share” electrons, giving both a full outer shell and ultimately being virtually unreactive.

For a while this didn’t make sense to me, mathematically. Atomic oxygen has eight electrons around one nucleus. Six in the outer, ‘valence’ shell. Molecular oxygen has 16 electrons around two nuclei. What’s the configuration to make it stable? Presumably both nuclei still have 2 electrons configured in their first shells, that makes 12 electrons to make for a stable configuration, which doesn’t seem to work out. Did it have something to do with ‘sharing’? Are the shells configured now around both nuclei instead of separately around each nucleus? What was I missing here? Another expert on the same website writes this:

[The two oxygen atoms combine to] create dioxygen, a molecule (O2) in which both oxygen atoms have 8 valence electrons, so they are happy (the molecule is stable).

But what about the extra electrons? It seems I’d have to give up on understanding and take the experts’ word, and I hate that. However, Khan Academy has come to the rescue. In video 14 of his chemistry series, Khan explains that the two atoms share two pairs of electrons, so yes, sharing was the key. So each atom can ‘kind of pretend’, in Khan’s words, that they have eight valence electrons. And this is a covalent bond, unlike an ionic bond which combines metals with non-metals, such as sodium and chlorine. 
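The bookkeeping that finally resolved my ’12 electrons’ puzzle can be put in a few lines of code. This is purely my own illustration of the octet-counting convention, not anything from Khan’s video:

```python
# Toy octet counter (my own sketch): each atom 'counts' its lone-pair
# electrons plus ALL the electrons in the bonds it participates in --
# this is the 'kind of pretend' bookkeeping of covalent bonding.
def octet_count(lone_pairs, bonding_pairs):
    return 2 * lone_pairs + 2 * bonding_pairs

# In O2, each oxygen keeps 2 lone pairs and shares a double bond (2 pairs)
print(octet_count(lone_pairs=2, bonding_pairs=2))  # 8 -- a full octet

# But physically the electrons aren't double-counted:
# 2 atoms x (2 lone pairs x 2e) = 8, plus 2 shared pairs x 2e = 4,
# giving the 12 valence electrons that had me confused.
```

Each atom ‘sees’ eight because the four shared electrons are counted by both atoms at once.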

Anyway, moving on. One of the most important features of oxygen, as mentioned, is its role in water – which is about 89% oxygen by weight. But how do these two elements – diatomic molecules as we find them in our environment – actually come together to form such a very different substance?
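Incidentally, that ‘about 89%’ figure is easy to check from standard atomic masses – a quick sketch of my own:

```python
# Mass fraction of oxygen in water, using standard atomic masses
# (values as found in any periodic table, in g/mol).
M_H, M_O = 1.008, 15.999
oxygen_fraction = M_O / (2 * M_H + M_O)  # oxygen's share of H2O's mass
print(f"{oxygen_fraction:.1%}")  # 88.8%
```

So oxygen, despite being only one atom in three, carries nearly nine-tenths of the weight.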

Well. According to this video, when H2 and O2, and presumably other molecules, are formed

electrons lose energy to form the new orbitals, the energy gets away as a photon, and then the new orbitals are stuck that way, they can’t undo themselves until the missing energy comes back.

This set me back on my heels when I heard it, I’d never heard anything like it before, possibly because photon stuff tends to belong to physics rather than chemistry, though photosynthesis rather undoes that argument…

So, sticking with this video (from Brigham Young University Physics Department), to create water from H2 and O2 you need to give them back some of that lost energy, in the form of ‘activation energy’, e.g. by ‘striking a match’. The video turns out to be kind of funny/scary, and it again involves photons. After the explosion, water vapour was found condensing on the inside of the glass through which hydrogen was pumped and ignited…

Certainly the demonstration was memorable (and there are a few of these explosive vids online), but I think I need more theory. Hopefully I’ll get back to it, but it seems to have much to do with the strong covalent bonds that form, for example, molecular hydrogen. It requires a lot of energy to break them. 

Once formed, water is very stable because the oxygen’s six valence electrons get two extras, one shared from each of the hydrogens, while each hydrogen gets a share in one of oxygen’s electrons. The atoms are stuck together in a type of bonding called polar covalent. Oxygen is more electronegative than hydrogen, meaning it attracts electrons more strongly – the shared electrons are pulled towards the oxygen, giving that part of the molecule a partial negative charge, with corresponding partial positive charges at the hydrogens. I might explore the effects of this polarity in another post.
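Chemists often eyeball bond character from the difference in Pauling electronegativities. Here’s a rough sketch of that rule of thumb – the 0.4 and 1.7 cutoffs are common textbook conventions rather than hard laws, and the values are standard Pauling numbers:

```python
# Standard Pauling electronegativities for a few elements
PAULING = {"H": 2.20, "O": 3.44, "Na": 0.93, "Cl": 3.16}

def bond_type(a, b):
    """Rough textbook classification from electronegativity difference."""
    diff = abs(PAULING[a] - PAULING[b])
    if diff < 0.4:
        return "nonpolar covalent"
    if diff < 1.7:
        return "polar covalent"
    return "ionic"

print(bond_type("O", "H"))    # polar covalent -- hence water's polarity
print(bond_type("Na", "Cl"))  # ionic -- the sodium chloride case
```

The O–H difference (1.24) lands squarely in polar-covalent territory, while Na–Cl (2.23) tips over into ionic.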

The percentage of oxygen in our atmosphere seems stable at 21% – that’s to say, it appears to be the same now as when I was born, but that’s not a lot of time, geologically. The issue of oxygen levels in our atmosphere over geological time is complex and contested, but the usual story is that something happened with the prokaryotic life forms that had evolved in the oceans billions of years ago, some kind of mutation which enabled a bacterial species to capture and harness solar energy. This green mutation, cyanobacteria, gave off gaseous oxygen as a waste product – a disaster for other life forms due to its highly reactive nature. The photosynthesising cyanobacteria, however, multiplied rapidly, oxygenating the ocean. Oxygen reacted with the ocean’s iron, creating layers of rust (iron oxide) on the ocean floor, later exposed on land by tectonic forces over the eons. Gradually, other organisms evolved that were adapted to the new oxygen-rich atmosphere. It became an energy source, which in turn produced its own waste product, carbon dioxide. This created a near-perfect cycle, as cyanobacteria required CO2 as well as water and sunlight to produce oxygen (and sugar). Other photosynthesising water-based and land-based life forms, plants in particular, emerged. In fact, these life forms had harnessed cyanobacteria as chloroplasts, a process known as endosymbiosis. 

I’ll end this bitsy post with the apparent fact, according to this Inverse article, that our oxygen levels are actually falling, and have been for nearly a million years, and that’s leaving aside the far greater effects due to human activity (fossil fuel burning consumes oxygen and releases CO2). Of course oxygen is vastly more abundant in the atmosphere than CO2, and the change is minuscule on the overall scale of things (unlike the change we’re making to CO2 levels). It will make much more of a difference in the oceans however, where the lack of dissolved oxygen is creating dead zones. The article explains:

 The primary contributor to these apocalyptic scenes is fertilizer runoff from agriculture, which causes algal blooms, providing a great feast for bacteria that consume oxygen. The abundance of these bacteria cause O2 levels to plummet, and if they go low enough, organisms that need it to survive swim away or die.

Just another of the threats to sea-life caused by humans. 

Written by stewart henderson

September 16, 2018 at 4:20 pm

Posted in environment, science


Always chemical: how to reflect upon naturopathic remedies


most efficacious in every case

So here’s an interesting story. When I was laid up with a bronchial virus (RSV) a few weeks ago, coughing my lungs up and having difficulty breathing, with a distinct, audible wheeze, I was offered advice, as you do, by a very well-meaning person about a really effective treatment – oregano oil. This person explained that, on two occasions, he’d come down with a bad cough and oregano oil had done the trick perfectly where nothing else worked.

I didn’t try the oregano oil. I followed my doctor’s recommendation and used the symptom-relieving medications described in a previous post, and I’m much better now. What I did do was look up what the science-based medicine site had to say about the treatment (I’d never heard of oregano oil, though I’ve had many other plant-based cures suggested to me, such as echinacea, marshmallow root and slippery elm – well ok I lied, I found the last two on a herbal medicine website).

I highly recommend the science-based medicine website, which has been run by the impressively-credentialed Drs David Gorski and Steve Novella and their collaborators for years now, and which thus has a vast database of debunked or questionable treatments to explore. It’s the best port of call when you’re offered anecdotal advice about any treatment whatsoever by well-wishers. Not that they’re the only people performing this service to the public. Quackwatch, SkepDoc, and Neurologica are just some of the websites doing great work, but they’re outnumbered vastly by sites spreading misinformation and bogus cures, unfortunately. It’s almost a catch-22 of the internet that you have to be informed enough to use it to get the best information out of it.

As to oregano oil specifically, Scott Gavura at science-based medicine provides a detailed account. I will summarise here, while also providing my own take. Firstly people need to know that when a substance, any substance – a herb or a plant, an oil extracted therefrom, or a tablet, capsule or mixture, something injectable or applied to the skin, whatever – is suggested as a treatment for a condition, they should consider this simple mantra – always chemical. That’s to say, a treatment will only work because it has the right chemistry to act against the treated condition. In other words you need to know something (or rather a lot) about the chemistry of the treating substance and the chemistry of the condition being treated. It’s no good saying ‘x is great for getting rid of coughs – it got rid of mine,’ because your cough may not have the same chemical cause as mine, and your cough in February 2007 may not have the same chemical cause as your cough in August 2017. My recent cough was caused by a virus (and perhaps I should change the mantra – always biochemical – but still it’s the chemistry of the bug that’s causing the problem), but no questions were asked about the cause before the advice was given. And you’ll notice when you look at naturopathic websites that chemistry is very rarely mentioned. And I’m not talking about toxins.

Gavura gives this five-point test for an effective treatment:

When we contemplate administering a chemical to deliver a medicinal effect, we need to ask the following:

  1. Is it absorbed into the body at all?
  2. Does enough reach the right part of the body to have an effect?
  3. Does it actually work for the condition?
  4. Does it have any hazardous, unwanted effects?
  5. Can it be safely eliminated from the body?

The answer to Q1 is that oregano oil contains a wide variety of chemical compounds, particularly phenolic compounds (71%). It’s these phenolic compounds that are touted as having the principal beneficial effects. However, though we know that there’s some absorption, we don’t have a chemical breakdown. We just don’t know which phenolic compounds are being absorbed or how much.

Q2 – No research on this, or on absorption generally. Topical effects (ie effects on the skin) are more likely to be beneficial than ingested effects, as the oil can maintain high concentration. This would have no effect on a cough.

Q3 – According to one manufacturer the oil has ‘scientifically proven results against almost every virus, bacteria, parasite, and fungi…’ (etc, etc, but shouldn’t that be bacterium and fungus?). In fact, no serious scientific research has ever been conducted on oregano oil and its effectiveness for any condition whatsoever. So the answer to this question is  – no evidence, beyond anecdote.

Q4 – There have been reports of allergic reactions and gastro-intestinal upsets, but the naturopathy industry is more or less completely unregulated so you can never be sure what you’re getting with any bottle of pills or ‘essential oils’. As Gavura points out, the lack of research on possible adverse effects, for this and other ‘natural’ treatments, is of concern for vulnerable consumers, such as pregnant women, young or unborn children, and those with pre-existing conditions.

Q5 – At low doses, there’s surely no concern, but nobody has done any research about dosing up on carvacrol, the most prominent component of oregano oil, which gives the plant its characteristic odour. Other organic components are thymol and cymene.


So there’s no solid evidence about oregano oil, or about the mechanism for its supposed efficacy. But what if my well-wisher was correct, and something in the oregano oil cleared up his cough – twice? And did so really really well? Better than several other treatments he tried?

Well, then we might be onto something. Surely a potential billion-dollar gold-mine, considering how debilitating your common-or-garden cough can be. And how, if not cleared up, it can lead to something way more serious.

So how would a person who is sure that oregano oil has fantastic curative properties (because it sure worked for him) go about capitalising on this potential gold-mine? Well, first he would need evidence. His own circle of friends would not be enough – perhaps he could harness social media to see if there were sufficient people willing to testify to oregano oil curing their cough, where other treatments failed. Then, if he had sufficient numbers, he might try to find out the causes of these coughs. Bacterial, viral, something else, cause unknown? It’s likely he wouldn’t make much headway there (most people with common-or-garden coughs don’t go to the doctor or submit to biochemical testing, they just try to ride it out), but no matter, that might just be evidence that the manufacturer was right – it’s effective against a multitude of conditions. And yet, it seems that oregano oil is a well-kept secret, only known to naturopathic companies and health food store owners. Doctors don’t seem to be prescribing it. Why not?

Clearly it’s because Big Pharma doesn’t support the stuff. Doctors are in cahoots with Big Pharma to sell attractive pills with long pharmacological names and precise dosages and complex directions for use. Together they like to own the narrative, and a multi-billion dollar industry is unlikely to be had from an oil you can extract from a backyard plant.


Our hero’s investment of time and energy has convinced him there’s heaps of money to be made from oregano oil’s miraculous properties, but that same investment has also convinced him that it’s the chemical properties that are key, and that if the correct chemical formula can be isolated, refined and commercialised, not only will he be able to spend the rest of his life in luxury hotels around the globe, but he will have actually saved lives and contributed handsomely to the betterment of society. So he will join Big Pharma rather than trying to beat it. Yes, there would have to be a massive upfront outlay to perform tests, presumably on rats or mice at first, to find out which chemical components or combinations thereof do the best job of curing the animals, who would have to be artificially infected with various bugs affecting the respiratory system, or any other bodily system, since there are claims that the oil, like Lily the Pink’s Medicinal Compound™, is ‘most efficacious in every case’.

But of course it would be difficult for any average bloke like our hero to scratch up the funds to build or hire labs testing and purifying a cure-all chemical extract of oregano oil. Crowdsourcing maybe, considering all the testimonials? Or just find an ambitious and forward-thinking wealthy entrepreneur?

Is that the only problem with the lack of acceptance, by the medical community, of all the much-touted naturopathic cures out there? Lack of funds to go through the painstaking process of getting a purified product to pass through a system which ends with double-blind, randomised, placebo-controlled human studies with large sample sizes?

Permit me to be sceptical. It’s not as if the chemical components of most herbal remedies are unknown. It’s highly unlikely that pharmacologists, who are in the business of examining the chemistry of substances and their effects for good or ill on the human body, haven’t considered the claimed cornucopia of naturopathic treatments and the possibility of bringing them into the mainstream of science-based medicine to the benefit of all. Yes, it’s possible that they’ve missed something, but it’s also possible, indeed more likely, that people underestimate the capacity of our fabulous immune system, the product of millions of years of evolution, to bring us back to health when we’re struck down by the odd harmful bug. When we’re struck down like this, we either recover or we die, and if we don’t die, we tend to attribute our recovery to any treatment applied. Sometimes we might be right, but it pays to be sceptical and to do research into a treatment, and into what ails us, before making such attributions. And to do so with the help of a good science-based medical practitioner. And remember again that motto: always chemical. 


Written by stewart henderson

August 24, 2018 at 10:18 am

on electrickery, part 2 – the beginnings


William Gilbert, author of De Magnete, 1600

Canto: So let’s now start at the beginning. What we now call electricity, or even electromagnetism, has been observed and questioned since antiquity. People would’ve wondered about lightning and electrostatic shocks and so forth.

Jacinta: And by an electrostatic shock, you mean the sort we get sometimes when we touch a metal door handle? How does that work, and why do we call it electrostatic?

Canto: Well we could do a whole post on static electricity, and maybe we should, but it happens when electrons – excess electrons if you like – move from your hand to the conductive metal. This is a kind of electrical discharge. For it to have happened you need to have built up electric charge in your body. Static electricity is charge that builds up through contact with clothing, carpet etc. It’s called static because it has nowhere to go until it finds a conductive path to carry it away.

Jacinta: Yes and it’s more common on dry days, because water molecules in the atmosphere help to dissipate electrons, reducing the charge in your body.

Canto: So the action of your shoes when walking on carpet – and rubber soles are worst for this – creates a transfer of electrons, as does rubbing a plastic rod with woollen cloth. In fact amber, a plastic-like tree resin, was called ‘elektron’ in ancient Greek. It was noticed in those days that jewellery made from amber often stuck to clothing, like a magnet, causing much wonderment no doubt.

Jacinta: But there’s this idea of ‘earthing’, can you explain that?

Canto: It’s not an idea, it’s a thing. It’s also called grounding, though probably earthing is better because it refers to the physical/electrical properties of the Earth. I can’t go into too much detail on this, its complexity is way above my head, but generally earthing an electrical current means dissipating it for safety purposes – though the Earth can also be used as an electrical conductor, if a rather unreliable one. I won’t go any further as I’m sure to get it wrong if I haven’t already.

Jacinta: Okay, so looking at the ‘modern’ history of our understanding of electricity and magnetism, Elizabethan England might be a good place to start. In the 1570s mathematically minded seamen and navigators such as William Borough and Robert Norman were noting certain magnetic properties of the Earth, and Norman worked out a way of measuring magnetic inclination in 1581. That’s the angle made with the horizon, which can be positive or negative depending on position. It all has to do with the Earth’s magnetic field lines, which don’t run parallel to the surface. Norman’s work was a major inspiration for William Gilbert, physician to Elizabeth I and a tireless experimenter, who published De Magnete (On the Magnet – the short title) in 1600. He rightly concluded that the Earth was itself a magnet, and correctly proposed that it had an iron core. He was the first to use the term ‘electric force’, through studying the electrostatic properties of amber.

Canto: Yes, Gilbert’s work was a milestone in modern physics, greatly influencing Kepler and Galileo. He collected under one head just about everything that was known about magnetism at the time, though he considered it a separate phenomenon from electricity. Easier for me to talk in these historical terms than in physics terms, where I get lost in the complexities within a few sentences.

Jacinta: I know the feeling, but here’s a relatively simple explanation of earthing/grounding from a ‘physics stack exchange’ which I hope is accurate:

Grounding a charged rod means neutralizing that rod. If the rod contains excess positive charge, once grounded the electrons from the ground neutralize the positive charge on the rod. If the rod is having an excess of negative charge, the excess charge flows to the ground. So the ground behaves like an infinite reservoir of electrons.

So the ground’s a sink for electrons but also a source of them.

Canto: Okay, so if we go the historical route we should mention a Chinese savant of the 11th century, Shen Kuo, who wrote about magnetism, compasses and navigation. Chinese navigators were regularly using the lodestone in the 12th century. But moving into the European renaissance, the great mathematician and polymath Gerolamo Cardano can’t be passed by. He was one of the era’s true originals, and he wrote about electricity and magnetism in the mid-16th century, describing them as separate entities.

Jacinta: But William Gilbert’s experiments advanced our knowledge much further. He found that heat and moisture negatively affected the ‘electrification’ of materials, of which there were many besides amber. Still, progress in this era, when idle curiosity was frowned upon, was slow, and nothing much else happened in the field until the work of Otto von Guericke and Robert Boyle in the mid-17th century. They were both interested particularly in the properties, electrical and otherwise, of vacuums.

Canto: But the electrical properties of vacuum tubes weren’t really explored until well into the 18th century. Certain practical developments had occurred though. The ‘electrostatic machine’ was first developed, in primitive form, by von Guericke, and improved throughout the 17th and 18th centuries, but they were often seen as little more than a sparky curiosity. There were some theoretical postulations about electrics and non-electrics, including a two-fluid theory, all of which anticipated the concept of conductors and insulators. Breakthroughs occurred in the 1740s with the invention of the Leyden jar, and with experiments in electrical signalling. For example, an ingenious experiment of 1746, conducted by Jean-Antoine Nollet, which connected 200 monks by wires to form a 1.6 kilometre circle, showed that the speed of electrical transmission was very high! Experiments in ‘electrotherapy’ were also carried out on plants, with mixed results.

Jacinta: And in the US, from around this time, Benjamin Franklin carried out his experiments with lightning and kites, and he’s generally credited with the idea of positive to negative electrical flow, though theories of what electricity actually is remained vague. But it seems that Franklin’s fame provided impetus to the field. Franklin’s experiments connected lightning and electricity once and for all, though similar work, both experimental and theoretical, was being conducted in France, England and elsewhere.

Canto: Yes, there’s a giant roll-call of eighteenth century researchers and investigators – among them Luigi Galvani, Jean Jallabert, John Canton, Ebenezer Kinnersley, Giovanni Beccaria, Joseph Priestley, Mathias Bose, Franz Aepinus, Henry Cavendish, Charles-Augustin Coulomb and Alessandro Volta, who progressed our understanding of electrical and magnetic phenomena, so that modern concepts like electric potential, charge, capacitance, current and the like, were being formalised by the end of that century.

Jacinta: Yes, for example Coulomb discovered, or published, a very important inverse-square law in 1784, which I don’t have the wherewithal to put here mathematically, but it states that:

The magnitude of the electrostatic force of attraction between two point charges is directly proportional to the product of the magnitudes of charges and inversely proportional to the square of the distance between them.

This law was an essential first step in the theory of electromagnetism, and it was anticipated by other researchers, including Priestley, Aepinus and Cavendish.
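For those who, like me, find code easier than formal notation: the law says F = k·q1·q2/r². Here’s a sketch of my own, using the standard value of Coulomb’s constant k:

```python
# Coulomb's inverse-square law: F = k * |q1 * q2| / r^2
K = 8.9875517873681764e9  # Coulomb's constant, N·m²/C²

def coulomb_force(q1, q2, r):
    """Magnitude of the electrostatic force (in newtons) between two
    point charges q1, q2 (coulombs) separated by distance r (metres)."""
    return K * abs(q1 * q2) / r ** 2

# Two 1-coulomb charges a metre apart: a colossal ~9 billion newtons
print(f"{coulomb_force(1.0, 1.0, 1.0):.3e}")  # 8.988e+09

# The 'inverse-square' part: double the distance, quarter the force
print(coulomb_force(1.0, 1.0, 2.0) / coulomb_force(1.0, 1.0, 1.0))  # 0.25
```

The proportionality to the product of the charges, and to 1/r², is exactly the structure of the verbal statement above.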

get it?

Canto: And Volta produced the first electric battery, which he demonstrated before Napoleon at the beginning of the 19th century.

Jacinta: And of course this led to further experimentation – almost impossible to trace the different pathways and directions opened up. In England, Humphry Davy and later Faraday conducted experiments in electrochemistry, and Davy invented the first form of electric light in 1809. Scientists, mathematicians, experimenters and inventors of the early nineteenth century who made valuable contributions include Hans Christian Ørsted, André-Marie Ampère, Georg Simon Ohm and Joseph Henry, though there were many others. Probably the most important experimenter of the period, in both electricity and magnetism, was Michael Faraday, though his knowledge of mathematics was very limited. It was James Clerk Maxwell, one of the century’s most gifted mathematicians, who was able to translate Faraday’s findings into mathematical equations, and more importantly, to conceive of the relationship between electricity, magnetism and light in a profoundly different way, to some extent anticipating the work of Einstein.

Canto: And we should leave it there, because we really hardly know what we’re talking about.

Jacinta: Too right – my reading up on this stuff brings my own ignorance to mind with the force of a very large electrostatic discharge….

now try these..

Written by stewart henderson

October 22, 2017 at 10:09 am