Archive for the ‘curiosity’ Category
In 2009, a poll held by the United Kingdom’s Engineering & Technology magazine found that 25% of those surveyed did not believe that men landed on the Moon. Another poll gives that 25% of 18- to 25-year-olds surveyed were unsure that the landings happened. There are subcultures worldwide which advocate the belief that the Moon landings were faked. By 1977 the Hare Krishna magazine Back to Godhead called the landings a hoax, claiming that, since the Sun is 93,000,000 miles away, and “according to Hindu mythology the Moon is 800,000 miles farther away than that”, the Moon would be nearly 94,000,000 miles away; to travel that span in 91 hours would require a speed of more than a million miles per hour, “a patently impossible feat even by the scientists’ calculations.”
From ‘Moon landing conspiracy theories’, Wikipedia
Haha, just for the record, the Sun is nearly 400 times further from us than the Moon, but who’s counting? So now to the Apollo Moon missions, and because I don’t want this exploration to extend to a fourth part, I’ll be necessarily but reluctantly brief. They began in 1961 and ended in 1975, and they included manned and unmanned space flights (none of them were womanned).
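Just to make the hoax arithmetic concrete, here’s a quick sanity check (a rough sketch using approximate average distances in miles, and taking the magazine’s own 91-hour transit figure at face value):

```python
# Sanity-checking the Back to Godhead claim against real figures.
# Distances are approximate mean values; 91 hours is the hoaxers' own number.
SUN_MILES = 93_000_000    # mean Earth-Sun distance
MOON_MILES = 238_900      # mean Earth-Moon distance

ratio = SUN_MILES / MOON_MILES        # how much farther the Sun really is
hoax_distance = SUN_MILES + 800_000   # the magazine's claimed Moon distance
hoax_speed = hoax_distance / 91       # mph implied by the hoax claim
real_speed = MOON_MILES / 91          # average mph actually needed in 91 hours

print(round(ratio))       # ~389, i.e. the Sun is 'nearly 400 times' farther
print(round(hoax_speed))  # over a million mph, the 'patently impossible' figure
print(round(real_speed))  # ~2,625 mph average, entirely feasible
```

In other words, the magazine’s distance is wrong by a factor of roughly 390, and its ‘impossible’ speed is inflated by exactly the same factor.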
But… just one more general point. While we may treat it as inevitable that many people prefer to believe in hoaxes and gazillion-dollar deceptions, rather than accept facts that are as soundly evidence-based as their own odd existences, it seems to me a horrible offence in this case (as in many others), both to human ingenuity and to the enormous sacrifice involved, not only in labour spent but in lives lost. So we need to fight this offensive behaviour, and point people to the evidence, and not let them get away with their ignorance.
The Apollo program was conceived in 1960 during Eisenhower’s Presidency, well before Kennedy’s famous mission statement. It was given impetus by Soviet successes in space. It involved the largest commitment of financial and other resources in peacetime history. The first years of research, development and testing involved a number of launch vehicles, command modules and lunar modules, as well as four possible ‘mission modes’. The first of these modes was ‘direct ascent’, in which the spacecraft would be launched and operated as a single unit. Finally, after much analysis, debate and lobbying, the mode known as Lunar Orbit Rendezvous (LOR) was adopted. The early phases of the program were dogged by technical problems, developmental delays, personal clashes and political issues, including the Cuban missile crisis. Kennedy’s principal science advisor, Jerome Wiesner, was solidly opposed to manned missions.
I can’t give a simple one-by-one account of the missions, as the early unmanned missions weren’t simply named Apollo 1, 2 etc. They were associated strongly with the Saturn launch vehicles, and the Apollo numbering system we now recognise was only established in April 1967. The Apollo 4 mission, for example, is also known as AS-501, and was the first unmanned test flight of the Saturn 5 launcher (later used for the Apollo 11 launch). Three Apollo/Saturn unmanned missions took place in 1966 using the Saturn 1B launch vehicle.
The manned missions had the most tragic of beginnings, as is well known. On January 27, 1967, the three designated astronauts for the AS-204 spaceflight, which they themselves had renamed Apollo 1 to commemorate the first manned flight of the program, were asphyxiated when a fire broke out during a rehearsal test. No further attempt at a manned mission was made until October of 1968. In fact, the whole program was grounded after the accident for ‘review and redesign’ with an overall tightening of hazardous procedures. In early 1968, the Lunar Module was given its first unmanned flight (Apollo 5). The flight was delayed a number of times due to problems and inexperience in constructing such a module. The test run wasn’t entirely successful, but successful enough to clear the module for future manned flights. The following, final unmanned mission, Apollo 6, suffered numerous failures, but went largely unnoticed due to the assassination of Martin Luther King on the day of the launch. However, its problems helped NASA to apply fixes which improved the safety of all subsequent missions.
And so we get to the first successful manned mission, Apollo 7. Its aim was to test the Apollo CSM (Command & Service Module) in low Earth orbit, and it put American astronauts in space for the first time in almost two years. It was also the first of the three-man missions and the first to be broadcast from within the spaceship. Things went very well in technical terms, a relief to the crew, who were only given this opportunity due to the deaths of the Apollo 1 astronauts. There were some minor tensions between the astronauts and ground staff, due to illness and some of the onboard conditions. They spent 11 days in orbit, and the space food, though on the improve, was far from ideal.
Apollo 8, launched only two months later in December, was a real breakthrough, a truly bold venture, as described in Earthrise, an excellent documentary of the mission made in 2005 (the astronauts were the first to witness Earthrise from the Moon). The aim, clearly, was to create a high-profile event designed to capture the world’s attention, and to eclipse the Soviets. As the documentary points out, the Soviets had stolen the limelight in the space race – ‘the first satellite, the first man in orbit, the first long duration flight, the first dual capsule flights, the first woman in space, the first space walk’. Not to mention the first landing of a human-built craft on the Moon itself.
The original aim of the mission was to test the complete spacecraft, including the lunar module, in Earth orbit, but when the lunar module was declared unready, a radical change of plan was devised, involving an orbit of the Moon without the lunar module. Apollo 8 orbited the Moon ten times at close quarters (110 kms above the surface) over a period of 20 hours. During the orbit the crew made a Christmas Eve telecast, the most watched program ever up to that time. Do yourself a favour and watch the doco. The commentary of the astronauts’ wives is memorable, and puts the moon hoaxers’ offensiveness in sharp relief.
By comparison to Apollo 8 the Apollo 9 mission (March ’69) was a modest affair, if that’s not too insulting. This time the complete spacecraft for a Moon landing was tested in low Earth orbit, and everything went off well, though space walking proved problematic, as it often had before for both American and Soviet astronauts, due to space sickness and other problems. With Apollo 10 (May ’69) the mission returned to the Moon in a full dress rehearsal of the Apollo 11 landing. The mission created some interesting records, including the fastest speed ever reached by a manned vehicle (39,900 kms/hour during the return flight from the Moon) and the greatest distance from home ever travelled by humans (due to the Moon’s elliptical orbit, and the fact that the USA was on the ‘far side of the Earth’ when the astronauts were on the far side of the Moon).
I’ll pass by the celebrated Apollo 11 mission, which I can hardly add anything to, and turn to the missions I know less – that’s to say almost nothing – about.
Apollo 12, launched in November 1969, was a highly successful mission, in spite of some hairy moments due to lightning strikes at launch. It was, inter alia, a successful exercise in precision targeting, as it landed a brief walk away from the Surveyor 3 probe, sent to the Moon two and a half years earlier. Parts of the probe were taken back to Earth.
The Apollo 13 mission has, for better or worse, come to be the second most famous of all the Apollo missions. It was the only aborted mission of those intended to land on the Moon. An oxygen tank exploded just over two days after launch in April 1970, and just before entry into the Moon’s gravitational sphere. This directly affected the Service Module, and it was decided to abort the landing. There were some well-documented hairy moments and heroics, but the crew managed to return safely. Mea culpa, I’ve not yet seen the movie!
Apollo 14, launched at the end of January 1971, also had its glitches but landed successfully. The astronauts collected quite a hoard of moon rocks and did the longest moonwalk ever recorded. Alan Shepard, the mission commander, added a Moon visit to his accolade of being the first American in space ten years earlier. At 47, he’s the oldest man to have stepped on the Moon. The Apollo 15 mission was the first of the three ‘J missions’, involving a longer stay on the Moon. With each mission there were improvements in instrumentation and capability. The most well-known of these was the Lunar Roving Vehicle, first used on Apollo 15, but that mission also deployed a gamma-ray spectrometer, a mass spectrometer and a laser altimeter to study the Moon’s surface in detail from the command module. Apollo 16 was another successful mission, in which the geology of the Moon’s surface was the major focus. Almost 100kgs of rock were collected, and it was the first mission to visit the ‘lunar highlands’. The final mission, Apollo 17, set records for the longest Moon stay, the longest total moonwalk time, the largest samples returned, and the longest time in lunar orbit. And so the adventure ended, with high hopes for the future.
I’ve given an incredibly skimpy account, and I’ve mentioned very few names, but there’s a ton of material out there, particularly on the NASA site of course, and documentaries aplenty, many of them a powerful and stirring reminder of those heady days. Some 400,000 technicians, engineers, administrators and other service personnel worked on the Apollo missions, many of them working long hours, experiencing many frustrations, anxieties, and of course thrills. I have to say, as an internationalist by conviction, I’m happy to see that space exploration has become more of a collaborative affair in recent decades, and may that collaboration continue, defying the insularity and mindless nationalism we’ve been experiencing recently.
Finally, to the moon hoaxers and ‘skeptics’. What I noticed on researching this – I mean it really was obvious – was that in the comments to the various docos I watched on YouTube, they had nothing to say about the science and seemed totally lacking in curiosity. It was all just parroted, ‘arrogant’ denialism. The science buffs, on the other hand, were full of dizzy geekspeak on technical fixes, data analysis and potential for other missions, e.g. to Mars. In any case I’ve thoroughly enjoyed this little trip into the Apollo missions and the space race, in which I’ve learned a lot more than I’ve presented here.
A young person I know is studying psychology, probably for the first time, and she informed me of the stages of early childhood psychological development she has been told about – oral, anal, phallic, latency and genital. I’d certainly heard of the first two of these, but not too much of the others. A quick squiz at the lists of Dr Google led me to Freudian psychosexual theory, which naturally raised my sceptical antennae. And yet, despite my limited parental experience I’ve noted that babies do like to put things in their mouths a lot (the oral stage is supposed to extend from birth to 1-2 years), sometimes to their great detriment. So, personality-wise, is the oral stage a real thing, and does it really give way to the anal stage, etc? I’m using the oral stage here to stand for all the stages in the theory/hypothesis.
These stages were posited by Freud as central to his hypothesis of psychosexual development – though how the phallic stage is experienced by girls is an obvious question. His view was that our childhood development was a matter of fixation, at various periods, on ‘erogenous zones’. After the oral stage, children supposedly switch to an anal stage, which lasts to 3 years of age – presumably on average. These switches might be delayed, or brought on earlier, in individual cases, and sometimes an individual might get stuck at a particular stage, denoting psychosexual problems.
So how real are these stages? Are some more real than others? What is the experimental evidence for them, do they exist in other primates, and if they exist, then why? What purpose do they serve?
It seems that Freud, and perhaps also his followers, built up a whole system around these stages and how individuals are more or less influenced by any one or a combination of them in the development of their adult personalities. Since the degree of influence of these different stages, and the way they’ve combined in each individual, is pretty well impossible to recover, the theory looks to be unfalsifiable. There also appears to be the problem that psychologists can usually only track back from the adult’s personality to speculate about early childhood influences, which looks like creating a circular argument. For example, if an individual presents as an overly trusting, dependent personality, this may be cited as evidence of fixation at the oral stage of development, because children fixated at this stage are believed to develop these personalities in later life. The only way out of this impasse, it seems to me, is to define this oral stage (or any other stage) more carefully, so that we can accurately identify children who have experienced a prolonged or fixated oral stage, and then return to them to observe how their personalities have developed.
Of course there are other problems with the theory. There needs to be a clearer explanation, it seems to me, of how these apparently erogenously-related stages come to be marked in personality traits in later life. The relationship between an obsession with putting things in your mouth, or sucking, licking or otherwise craving and enjoying oral sensations, and a dependent, trusting personality, is by no means obvious. In fact, some might go as far as to say that, prima facie, it makes about as much sense as an astrologically-based account of personality.
Perhaps if we look at the oral stage, or claims about it, more closely, we’ll find something of an explanation. In this description, we learn that the libido, or life force, gets fixated in the oral stage in more than one way, leading to an ‘oral receptive personality’ and an ‘oral aggressive personality’. The first type, which is a consequence of a delayed or overly fixated oral stage, is trusting and dependent, the second is dominating and aggressive, due largely to a curtailed oral stage, apparently. Those who experienced a longer oral stage in childhood are supposedly more likely to be smokers and nail-biters as adults, though I’m not sure how this relates to being a dependent or trusting personality.
In any case this hardly takes us further in terms of evidence, and it’s worth noting that the site in which this is mooted is described as ‘integrated sociopsychology’. Dr Steven Novella, in the most recent episode of the Skeptics’ Guide to the Universe, warned that terms such as ‘integrative’, ‘functional’ and ‘holistic’ placed before ‘medicine’ are red flags indicating a probably bogus approach. I suspect the same goes for psychology. Obviously the website’s author is a Freudian, and he makes this statement as to evidence:
What is undoubtedly disturbing to the ‘Freud-bashers’ is how much evidence has accumulated over the years to say that, in broad terms at least, if not always in detail, Freud’s observations pretty much stand up so many years later.
However, other psychology sites I’ve looked at, which don’t appear to me to be particularly Freud-bashing, have pointed to the lack of evidence as the principal problem for Freud’s stages. Of course the major problem is how to test for the ‘personality effect’ of these stages. Again I think of astrology – someone dedicated to astrological causation can always account for personality ‘deviations’ in terms of cusps and conjunctions and ascendants and the like, and this would surely also be the case for the confounding influences of our various cavities and tackle, so to speak.
Some 20 years ago a paper by Fisher & Greenberg (1996) suggested that Freud’s stages and other aspects of his early childhood writings should be scientifically examined as separate hypotheses, in a sort of piecemeal fashion. Unfortunately I can find little evidence that evidence has been found for the oral stage as a marker for later personality development – or even looked for. This is probably because most scientists in the field – experimental psychologists – have little interest in these Freudian hypotheses, and little funding would be available for testing them. They would surely have to be longitudinal studies, with a host of potentially confounding factors accounted for, and the end results would hardly be likely to convince other early childhood specialists.
I’ve said the theory looks to be unfalsifiable, but I’m not quite prepared to say outright that it is. It seems to me that the oral stage, with its obvious association with breast-feeding, and the obvious association between prolonged breast-feeding and dependence, at least in popular culture, is the one most amenable to testing. The later Oedipus/Electra complexes, associated I think with the phallic stage, seem rather too convoluted and caveat-ridden to be seriously testable. I must admit to a residual fondness for some of Freud’s theories of development though, however unscientific they might be. Though I was never interested in the strict form of the Oedipus complex, because my father was by far the weaker of my parents, I felt it offered some insight into relations with the dominant parent – struggle, rivalry, attempts to overthrow. I also agreed with his general view that early childhood is absolutely crucial to our subsequent psychological development, and I found his ego, id and superego hypotheses enlightening and fascinating. Polymorphous perversity, sublimation and the pervasive influence of libido also tickled my fancy a lot.
I think it’s fair to say that Freud has had a greater influence on popular culture than on science, but it has been a profound influence, and overall a positive one. The term ‘observations’, rather than theories, seems better to describe his contributions. In writing about the libido and the pleasure principle, inter alia, he accepted our instinctive animal nature, and gave us ideas about how to both harness it and overcome it. Notions like the id and the superego seemed to give us fresh ways to think about desire, discipline and control. His ideas and concepts tapped into stuff that was very personal to us in our individual struggles, and his universalising tendencies helped us, I think, to look sympathetically at the struggles of others. Libido itself was a banner-word that helped release us from the straitjacket of earlier sexual thinking – or avoidance thereof.
It’s also probably unfair to expect from Freud’s pioneering work anything like the scientific rigour we expect and really need from psychology today. Certainly he was far too firm about the rightness of his most speculative work – I read The Interpretation of Dreams as an ideas-hungry teenager and was impressed with its first-half demolition of previous dream theories, but the second-half presentation of his own theory struck me even then as ludicrously weak, though it had the definitely positive effect of putting me off dream-interpreters for life (a dream that can be interpreted is a dream not worth having, and that’s their greatest gift to us). It’s more what he drew attention to that counts. His concept of the unconscious doesn’t really cut it today, but he made us start thinking of unconscious motivations in general, and much else besides. I’ve never been to an analyst, but I think one benefit of the psychoanalytic movement is to help us realise that there’s no normality and that we all carry baggage of guilt, anger, fear and frustration. For all its failings, his was a humanising enterprise.
I’ve been working desultorily on a number of blog pieces which I’m struggling to finish, partly because they’re hard work but also because the excitement and stress is building for my maiden voyage overseas, not counting my barely-brain-developed boat-trip to Australia from Southampton aged 5 – memories include a camel train on the banks of the Suez, being rescued from drowning in the ship’s pool, and being befriended by an older kid which mainly involved being chased around the decks a lot. So from this day forth I’m devoting this blog to the trip, lots of short sharp and shiny shite, around 500 words daily, though I’m unlikely to keep to that limit, seriously.
So I’m not yet packed and wondering about the Aus$ which they say is rising and that’s good for OS travel. I’ve been described – though only by one person, my travelling companion – as a Scottish mothpurse and my main stressor is definitely $$$$ – sadly I don’t have the symbol for euros on my keyboard. I think the recent rise means cheapie flights but ours was paid-for long ago. The current Aus$ buys .68 in euros and I’ve no idea whether that’s good or bad or better than it was, whenever was was. Anyhow nothing to be done so let’s change the subject to my moustache. I thought it’d be a fine frivolity to grow one for the trip, something Frenchy and chic and daft, but after about four days’ growth it’s looking more Hitler than Charles Boyer, who was too chic to sport a tache anyway, and besides I’ve never liked them. At least my hair’s grown salt’n pepper with age, and seriously short on pepper, so it’ll be prominent as frost on a silver dust bush, and a change is as good as a haircut so I’ll leave it growing for now.
I’m at the frantically seeking advice stage. Got my first-ever passport – had to become an Australian citizen, which made me feel like a fraud come congratulations time – money-belt, international connector thingy. Downloaded Skype for myself and my travelling companion (though I won’t be using it, having no friends and family), had it explained to me that Messenger through Facebook is the cheapest form of communication – would desperately love to have an extra TC, aged about 13, to keep me straight on smartphone technostuff etc. Told to wear stockings on the flight, against DVT, which I may not, and have found hopefully the right advice against aerosynusitis, aka plane brain, which had me folded over my seatbelt on a recent flight to Melbourne. Still have to photocopy my passport, do some house-cleaning and catfood-buying for my house-sitter, and other things I can’t remember. My mind’s blanking out unpredictably so I’m sure to stuff something majorly up, but my TC’s coming over tomorrow to help with the packing and share the stress.
Okay the itinerary. A 14-day cruise or thereabouts down the Danube-Main-Rhine from Budapest to Amsterdam, after which a two-night stopover and then a train to Paris for a week’s stay on the île Saint-Louis, the walls of our cosy pied-à-terre lapped by the Seine, plus ou moins. Then down the tunnel and two nights in once-swinging London, and then, hurly-burly done, back to the serenity and quiet contemplation of home. On verra.
Jacinta: So here’s a question – if vegans have pets – say a cat or a dog – do they feed them only vegetables?
Canto: I don’t know, I suppose it would depend on the vegan…
Jacinta: Shouldn’t it depend on the pet? Cats and dogs are carnivores aren’t they? So it would be a form of cruelty to deprive them of meat. Might even be murder.
Canto: We don’t extend murder to the killing of other animals.
Jacinta: Many vegans do.
Canto: Good point. I once read an article by a vegan philosopher, who gets out of those problems by declaring that using animals as pets is unethical. A form of slavery, I suppose.
Jacinta: So, we free the pets? Along with the cows, the sheep, the donkeys, the camels, the water buffalos, the horses, the chooks and pigeons and all those other creatures we’ve used and abused so horridly?
Canto: Well, from memory – I’ll never be able to hunt out the article – he didn’t address the issue of those animals already under captivity of one sort or another. He was simply wanting to argue on general principles that using animals for our personal benefit was unethical.
Jacinta: Even if it benefits the animal?
Canto: Well I suppose the argument would be that even a well-treated slave is still a slave.
Jacinta: But if you free a dog, say, what would happen to it? You’re actually throwing it out of its home, it has nowhere else to go. And I believe that there’s historical evidence that dogs, and probably cats too, have adapted to live with humans. That it was their choice, in a sense. Like pigeons in the city getting fat on leftover bits of hamburger, with no obvious ill-effects. Do pigeons get diabetes?
Canto: Well there’s an obvious difference between scavenging pigeons and pets. Pets don’t choose to become pets. I think that’s the way the argument would run. Unfortunately there are a lot of current pets who would suffer from being set free, but that’s not the issue.
Jacinta: I think I see. We look after the pets we’ve got, then bury them and don’t have any more. And this wouldn’t mean the end of all dogs because there are plenty of strays – scavengers – to maintain the species. And no more enforced ‘pedigree’ breeding – I’d be all for that. But there’s a problem – in order to get rid of all the pets, you have to stop them breeding and that would mean desexing them – a gross interference of their right to reproduce. And if you allow them to reproduce, you must surely bear responsibility for their offspring as your home is theirs. You’re caught in a trap, you can’t walk out, because you love them babies too much.
Canto: You’re looking at it all from a practical perspective, which is all fine and good and relevant, but the issue for this philosopher was, I think – judging from his being a vegan – that all such usage of animals – pets as cuddly toys, dolphins as trained performers, horses and camels as pack animals, etc, not to mention farming them for slaughter – is unethical. What do you think of that as a general principle?
Jacinta: I don’t think it holds up, because species take advantage of other species all the time, and not just by preying on them. Sharks have their remoras, we have lice more or less specially adapted to us, roses have their aphids, in fact everywhere you look you have species making use of other species. And presumably being a vegan he marks a strict boundary between animal and vegetable and in reality that’s quite a fuzzy boundary, like with coral. And what about insects, what’s the vegan take on that?
Canto: Presumably negative – they have eyes and antennae and feelings of some sort.
Jacinta: Yes, well it’s a step too far I think. Yes we have a moral responsibility to avoid causing undue suffering….
Canto: Well what about this argument. Because we can survive – and indeed thrive – on only plants, we should do so. I mean, you’re talking about species that, say, are mostly carnivorous – that won’t survive if their food supply dries up. Sharks, for example, they can’t just become vegan, they’ve adapted to a very specific diet. We on the other hand are omnivores, we can dispense with certain varieties of food, including meat, and still live healthy lives, perhaps.
Jacinta: Hmmm, that’s definitely a more difficult question. I do believe that being omnivores, or being very adaptable in our diet has stood us in very good stead in the past, like in the last major ice age when we almost died out apparently. So I’m wondering whether confining our diet might not expose us to greater risks…
Canto: It may not even mean confining our diet – we could synthesise many of the proteins and other nutrients we nowadays get from meat. We’ve already done that, probably.
Jacinta: Well I’ve heard they’re still a long way from synthesising anything that really has the nutrients as well as the texture, flavour, odour and je ne sais quoi of meat. At under about $200,000.
Canto: And if they achieved that feat, and got it down to competitive prices, would you go vego?
Jacinta: Well of course – I’d have no reason not to. I just don’t think it’ll happen in my lifetime.
Canto: But let’s say for argument’s sake that it does – would you feed this synthetic stuff to your pet cat?
Jacinta: Ah so we come full circle. Yes I would, since it would be more or less chemically identical to meat.
Canto: But animals that have adapted to become carnivores have also adapted to become hunters. They go together. Haven’t you turned your cat from its proper course in life?
Jacinta: No, she became removed from her ‘proper course’, if there is such a thing, by becoming my pet, whether by her choice or mine, or the choice of her ancestors. Likely she will keep up her hunting skills, catching flies and insects and mice and small birds, if she can. And she will benefit from being my friend, as I will benefit from being hers. Like all good friends, we’ll use each other for our own purposes, which we hope will be, and will try to make, mutually beneficial.
Canto: Okay, no further questions your excellency.
So as I approach my sixtieth year I’m in a mood to reflect on my largely wasted, dilettantish life (at least seen from a certain perspective… ).
It seems to me that my two older siblings and I were largely the products of benign neglect, if that’s not too unfair to my parents, who seemed largely preoccupied with their – highly dysfunctional – relationship with each other. Anyway this neglect had its advantages and disadvantages, and it was offset by at least one key decision of my mother (by far the dominant parent). She had us taken to the local library once a fortnight to borrow books, and there were always books aplenty in the house, including at least two sets of encyclopaedias. So from the age of six or seven until I left home, the local libraries became a haven.
From almost the beginning though I felt a difference between learning, which was a thrill, and school, which I suffered in silence. My first strong memory of school comes from grade one, when I was five or six. My teacher asked me to read from our class reader and I had to tell her that I’d forgotten to bring it from home. She blew up at me. ‘You’ve forgotten it again! What’s the matter with you? How many times have I told you,’ etc etc. I was extremely humiliated. I was learning that I was vague, forgetful, disorganised, and it was all too true. Shortly after this, I arrived at school and discovered I’d forgotten my reader again. I was so scared I hid in the bushes until break time, when I rejoined the class unnoticed, apparently (though probably not). I remember the sense of being defiant and tricksterish.
It’s funny that I’m now a teacher who checks students’ homework and has to admonish those who don’t do it, because as a kid in primary school and later in high school, when the issue loomed much larger, I never did any homework. Not once, ever. I even got caned for it in high school. And suffered endless screaming fits from my mother when the matter was reported back to her. I remember many sleepless nights fretting about how to survive the next day’s questioning, but still I was unable or unwilling to comply. I spent a lot of my school days staring out the window, daydreaming of freedom. One day I watched a tiny bird – a hummingbird, I thought, but we have no hummingbirds in Australia – hovering a bit driftily above some bushes, for ages and ages. What an ability, what a perspective it had! And yet it felt constrained to hover there. Maybe only humans could free themselves from these ‘natural’ constraints.
I concocted an idea for a novel, which I confided to my sister, of schoolkids rising up and throwing out the teachers, establishing an ‘independent state’ school – an idea I probably took from Animal Farm. She was very enthusiastic, probing me on the details, assuring me it would be a best-seller, I would become famous. I became briefly obsessed with contemplating and planning the takeover – the secret meetings, the charismatic leader, the precisely organised tactics, the shock and dismay of our former masters, the nationwide reaction – but of course I soon stumbled over the outcome. Surely not Animal Farm again?
I learned over time that Elizabeth, our town, was the most working-class electorate in South Australia, with the largest percentage of Labor voters in the state, and possibly even the country. Of course, one had to take pride in being the biggest or the most of anything, but what did it mean to be working-class? Was it a good or a bad thing? Was our family more or less working-class than our neighbours? I was discovering that interesting questions led to more questions, rather than to answers. That, as Milan Kundera wrote, the best questions didn’t have answers, or at least not final ones. Of course, the provisional answer seemed to be that it wasn’t good to be working class, or middle class, or upper class, but to move beyond such limitations. But I was learning, through my library reading, which increasingly consisted of Victorian English literature for some reason, that class wasn’t so easy to transcend.
I continued to struggle as my schooling moved towards the pointy end. Classmates were dropping out, working in factories, getting their first cars. I was wagging school a lot, avoiding the house, sleeping rough, drinking. My older brother started an economics degree at university, probably the first person in the history of my parents’ families to do so, as the prospect of university education was opened up to the great unwashed, but I was unlikely to be the second. I recall wagging it one afternoon, walking to the end of my street, where the city of Elizabeth came to an abrupt end, and wandering through the fields and among the glasshouses of the Italian market gardeners, armed with my brother’s hefty economics textbook, and getting quite excited over the mysteries of supply and demand.
And so it went – I left school, worked in a factory here, a factory there, went on the dole, worked in an office for a while, got laid off, another factory, moved to the city, shared houses with art students, philosophy students, mathematics nerds (whom I loved), wrote volumes of journals, tried to write stories, ritually burned my writings, read philosophy, had regular bull sessions about all the really interesting things that young people obsess about and so on and on. And I haven’t even mentioned sex.
I’d always been hopelessly shy with the opposite sex and wrote myself off as eternally poor and inadequate, but I loved girls and fantasised endlessly. I felt guilty about it, not because I thought it immoral – I never had any moral qualms about sex, which made it all the more easy to dismiss religions, which all seemed to be obsessed with regulating or suppressing it. I felt guilty because sexual daydreaming always seemed the lazy option. I was like Proust’s Swann: I would tire easily from thinking too much, especially as those great questions never had any easy or final answers. So I would give up and indulge my fantasies, and even the occasional unrequited or unrealistic passion for real female acquaintances. I remember hearing of a celebrated mathematician who would wander homeless around the USA I think it was, couchsurfing at the homes of mathematical colleagues male and female, inspiring them to collaborate with him on mathematical papers, so that he held a record for the most papers published in peer-reviewed journals. An attractive female colleague laughed at the idea of an affair with him, because apparently everyone knew he was entirely asexual, had never been heard to even mention sex in his life… Could this be true, I wondered, and if so, how could I create for myself a brain like his? It seemed to me that Aristotle was right: the pleasure derived from certain types of contemplation was greater than sexual pleasure (though dog knows I’d hate to forgo sex). I’d experienced this myself, grappling with something in Wittgenstein, reading a passage over and over until an insight hit me and set me pacing around my bedroom all night long talking to myself. But maybe it was all bullshit.
So now to get to the heart of the matter – pourquoi science? As a youngster I read novels, and sometimes works of history – one of my first big adult books was a very good biography of Richard III, which I read at 14, and which came flooding back when Richard’s body was miraculously discovered recently. But I never read science. At school I quickly lost track of physics and mathematics, while always being vaguely aware of how fundamental they were. Through philosophy in my early twenties I started to regain an interest, but generally I’d resigned myself to being on the arts side of the great divide.
One book, or one passage in a book, changed this. The book was Der Zauberberg, or The Magic Mountain, by Thomas Mann, which I read in 1981. This was the story of Hans Castorp, a young man in his mid-twenties, as I was when I read it. As a tubercular patient, he was sent to a sanitarium in the Alps for a period of enforced idleness, where he encountered a number of more or less interesting characters and was encouraged to grapple with some more or less interesting ideas. Wrapped up on his loggia, he was reading some books on fundamental science, and fell into contemplation, and in a passage of some fifteen pages he asked himself two fundamental questions, both of which branched off into a whole series of sub-questions (or so I remember it). They were: What is life? and What is matter? And there was something about the way Mann animated this Castorp character, as ordinary a fellow as myself, and made me identify with his questioning and his profound wonder. It just flipped a switch in me. These were the questions. They could easily fill several lifetimes. No reason ever to be bored again.
I immediately went out and bought my first ever science magazine, Scientific American, and throughout the eighties I bought each monthly issue and read it cover to cover, not always understanding it all of course, but gradually building up a general knowledge. Later I switched to New Scientist, and nowadays I read the fine Australian magazine Cosmos, as well as listening to science podcasts and reading the odd blog. I’m far from being a scientist, and I’ll never have more than a passing knowledge – but then, that’s all that even the most brilliant scientist can hope for, as Einstein well knew.
But here’s the thing – and I’ll expand on this in my next post. It’s not science that’s interesting – science is just a collection of tools. What’s interesting is the world. Or the universe, or everything. It’s the curiosity, and the questions, and the astonishing answers that raise so many more questions. For example – what is matter? Our investigations into this question have revealed that we know bugger all about the stuff. And when we were young, as a species, we thought we knew it all!
Next time, I’ll focus more deeply on science itself, its meaning and its detractors.
“bashful, insolent; chaste, lustful; prating, silent; laborious, delicate; ingenious, heavy; melancholic, pleasant; lying, true; knowing, ignorant; liberal, covetous, and prodigal”
Michel de Montaigne, ‘Myself’
Sitting at my computer with the ABC’s ‘Rage’ on in the background, when on came a video by an artist who’s taken the moniker ‘Montaigne’, and how could I not be attracted? Good luck to her. I first stumbled on the original Montaigne decades ago, and like thousands before and since, I was fairly blown away. He’s been an inspiration and a touchstone ever since, and to think I’m now approaching his age at his death. One thing he wrote has always stayed with me, and I’ll misquote in the Montaignian tradition, being more concerned with the idea than the actual words – something like ‘I write not to learn about myself, but to create myself’. This raises the importance of writing, of written language, to an almost ridiculous degree, and I feel it in myself, as I’ve sacrificed much to my writing, such as it is. Certainly relationships, friendships, career – but I was always bad at those. All I have to show for it is a body of work, much of it lost, certainly before the blogosphere came along, the blogosphere that retains everything, for better or worse.
The New Yorker captures the appeal of Montaigne well. He wasn’t an autobiographical writer, in that he didn’t dwell on the details of his own life, but as a skeptic who trusted little beyond his own thoughts, he provided a fascinating insight into a liberal and wide-ranging thinker of an earlier era, and he liberated the minds of those who came later and were inspired by his example, including moi, some 400 years on. So, I’d like to make my writings a bit more Montaignian in future (I’ve been thinking about it for a while).
I’ve been focussing mainly on science heretofore, but there are hundreds of bloggers better qualified to write about science than me. My excuse, now and in the future, is that I’m keen to educate myself, and science will continue to play a major part, as I’m a thorough-going materialist and endlessly interested in our expanding technological achievements and our increasing knowledge. But I want to be a little more random in my focus, to reflect on implications, trends, and my experience of being in this rapidly changing world. We’ll see how it pans out.
Reading the celebrated biography of Charles Darwin by Adrian Desmond and James Moore, I was intrigued by some remarks in a letter to his cousin and friend, William Darwin Fox, referring to the ‘paradise’ of Fanny and Sarah Owen’s bedrooms. This was 1828, and the 19-year-old Darwin, already an avid and accomplished beetle collector and on his way to becoming a self-made naturalist, was contemplating ‘divinity’ studies at Cambridge, having flunked out of medicine in Edinburgh. Fanny was his girlfriend at the time. These bedrooms were
‘a paradise… about which, like any good Mussulman I am always thinking… (only here) the black-eyed Houris… do not merely exist in Mahomets noddle, but are real substantial flesh and blood.’
It’s not so much the sensual avidity shown by the 19-year-old that intrigues me here, but the religious attitude (and the fascinating reference to Islam). For someone about to embark on a godly career – though with the definite intention of using it to further his passion for naturalism – such a cavalier treatment of religion, albeit the wrong one, as ‘inside the noddle’, is quite revealing. But then Darwin’s immediate family, or the males at least, were all quasi-freethinkers, unlike his Wedgwood cousins. Darwin never took the idea of Holy Orders seriously.
I’m writing this because of some remarks made in the workplace which – well, let’s just say they set my sceptical antennae working overtime. They were claims made about the bubonic plague, of all things.
Bubonic plague, dubbed the Black Death throughout European history, is a zoonotic disease, which means it spreads from species to species – in this case from rodents to humans via fleas. Actually there are three types of ‘black death’ plagues, all caused by the enterobacterium Yersinia pestis, the others being the septicemic plague and the pneumonic plague. Other zoonotic diseases include ebola and influenza. Flea-borne infections generally attack the lymphatic system, as does bubonic plague. The term ‘bubonic’ comes from the Greek for ‘groin’, and the best-known symptoms of the disease were ‘buboes’, grotesque swellings of the glands in the groin and armpit.
It wasn’t called the Black Death for nothing (the blackness was necrotising flesh). It’s estimated that half the European population was wiped out by it in the 14th century. If untreated, up to two-thirds of those infected will be dead within four days. With modern antibiotic treatments, the mortality rate is of course greatly reduced. The antibiotic streptomycin has proved very effective. Of course treatment should be immediate if possible, and prophylactic antibiotics should be given to anyone in contact with the infected.
The plague is first known to have struck Europe in the sixth century, at the time of Justinian. The Emperor actually caught the disease but recovered after treatment. It’s believed that the death toll was very high, but little detail has been recorded. The fourteenth century outbreak appears to have originated in Mongolia, from where it spread through Mongol incursions into the Crimea. An estimated 25 million died in this outbreak from 1347 to 1352. More limited outbreaks occurred in later centuries, and the last serious occurrences in Europe were in Marseille in 1720, Messina (Sicily) in 1743, and Moscow in 1770. However it emerged again in Asia in the nineteenth century. Limited for some time to south-west China, it slowly spread from Hong Kong to India, where it killed millions of people in the early twentieth century. Infected rats were inadvertently transported to other countries by trading vessels, resulting in outbreaks in Hawaii and Australia. By 1959, when worldwide casualties dropped to under 200 annually, the World Health Organisation was able to declare the disease under control, but there was another outbreak in India in 1994, causing widespread panic and over 50 deaths.
So that’s a very brief history of the rise and fall of bubonic plague, but I’m interested in looking at early treatments and the discovery of its cause. For the fact is that, even in 1900, when the plague first came to Australia, there was no clear consensus among the experts as to its means of transmission, with many believing that it spread through contact with the infected. However, a growing body of evidence was pointing to a connection with epizootic infection in rats. Work done by the Australian bacteriologists Frank Tidswell, William Armstrong and Robert Dick, in a new public health department in Sydney under Chief Medical Officer John Ashburton Thompson, and prompted directly by the plague outbreaks in Sydney from 1900 to 1925, contributed substantially to the modern understanding of Yersinia pestis and its spread from rats to humans. This Australian work was another step forward for the germ theory of disease, first suggested by the French physician Nicolas Andry in 1700 and built upon by many experimental and speculative savants over the next 150 years. The great practical success of John Snow’s work on cholera, followed by the researches of Louis Pasteur and Robert Koch, established the theory as mainstream science, but zoonotic infections, especially indirect ones where the infection passes from one species to another by means of a vector, have always been tricky to work out.
In fact it was in Hong Kong that the Yersinia pestis bacterium was identified as the culprit. An outbreak of plague occurred there in the 1890s, and Alexandre Yersin, a bacteriologist who had worked under both Pasteur and Koch, was invited to research the disease. He identified the bacterium in June 1894, at about the same time as a Japanese researcher, Kitasato Shibasaburo. The cognoscenti recognise that both men should share the honour of discovery.
What is fascinating, though, is that the spread of plague from Asia in the 1890s to various ports of the world in the early 20th century was very different from the spread of earlier pandemics. Did this have anything to do with science or human practices? Well, what follows is drawn from by far the most comprehensive analysis of the disease I’ve found online, Samuel Cohn’s ‘Epidemiology of the Black Death and successive waves of plague’, in the Cambridge Journal of Medical History.
Cohn’s research and analysis casts credible doubt on the whole plague story, specifically the assumption that we’re dealing with one disease, from the sixth century through to modern outbreaks. He recounts the standard story of three separate pandemics, in the sixth century with a number of recurrences, ditto in the fourteenth century, and in the nineteenth. However, the epidemiology of the most recent pandemic, definitely attributed to Y pestis and its carrier the Oriental rat flea, Xenopsylla cheopis, is substantially different from that of pandemics one and two, a fact which, according to Cohn, has been obscured by inaccurate analysis of the records. Cohn’s own analysis, it must be said, is exhaustive, with 30 pages of references in a 68-page online essay. He doesn’t have a solution as to what caused the earlier pandemics, but he asks some cogent questions. For my own understanding’s sake, I’ll try to summarise the issues in sections.
speed of transmission
Pandemic 3, if we can call it that, was a much slower mover than the previous two. It seems to have sprung up in China’s Yunnan province from where it reached Hong Kong in 1894. It was noted in the early 20th century that Y pestis was travelling overland at a speed of only 12 to 15 kilometres a year. This can be explained by the fact that Y pestis is a disease mainly of rats, though other rodents can also be infected, and rats don’t move far from their home territories. At this rate pandemic 3, even in a world of railways, cars, and dense human populations, would have taken some 25 years to cover the distance that pandemic 1 covered in 3 months. Pandemic 1 made its first appearance in an Egyptian port in 541 and quickly spread around the Mediterranean from Iberia to Anatolia. Within two years of first occurrence it had reached the wastelands of Ireland and eastern Persia. Pandemic 2, believed to have originated in India, China or the Russian steppes, made its first European appearance in Messina, Sicily in 1347. Within three years it had impacted most of continental Europe, and had even reached Greenland. The fastest overland travel recorded for plague occurred in 664 (pandemic 1), when it took only ninety-one days to travel 385 kilometres from Dover to Lastingham (4.23 km a day) – far faster than anything seen from Y pestis since its discovery in 1894. Pandemic 2’s speed was similar, as Cohn details it:
like the early medieval plague, the “second pandemic” was a fast mover, travelling in places almost as quickly per diem as modern plague spreads per annum. George Christakos and his co-researchers have recently employed sophisticated stochastic and mapping tools to calculate the varying speeds of dissemination and areas afflicted by the Black Death, 1347–51, through different parts of Europe at different seasons. They have compared these results to the overland transmission speeds of the twentieth-century bubonic plague and have found that the Black Death travelled at 1.5 to 6 kilometres per day—much faster than any spread of Yersinia pestis in the twentieth century. The area of Europe covered over time by the Black Death in the five years 1347 to 1351 was even more impressive. Christakos and his colleagues maintain that no human epidemic has ever shown such a propensity to cover space so swiftly (even including the 1918 influenza epidemic). By contrast to the spread of plague in the late nineteenth and twentieth centuries the difference is colossal: while the area of Europe covered by the Black Death was to the 4th power of time between 1347 and 1351, that of the bubonic plague in India between 1897 and 1907 was to the 2nd power of time, a difference of two orders of magnitude.
All of which raises the question – why was pandemic 3 so much slower than the others? Could it be that Y pestis wasn’t the cause of the earlier pandemics?
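Just to sanity-check the arithmetic behind that comparison (using only the figures quoted above – the 385 km Dover-to-Lastingham run in 91 days, against the 12 to 15 km a year observed for modern plague), here’s a quick back-of-envelope calculation; the midpoint of the modern range is my own choice, not Cohn’s:

```python
# Back-of-envelope comparison of the overland transmission speeds quoted above.
# The input figures come from the sources cited in the text; the ratio is mine.

medieval_km = 385          # Dover to Lastingham, 664 CE
medieval_days = 91
medieval_speed = medieval_km / medieval_days       # km per day

modern_km_per_year = (12 + 15) / 2                 # midpoint of the 12-15 km/year range
modern_speed = modern_km_per_year / 365            # km per day

print(f"Pandemic 1 overland speed: {medieval_speed:.2f} km/day")
print(f"Pandemic 3 overland speed: {modern_speed:.3f} km/day")
print(f"The medieval plague moved roughly {medieval_speed / modern_speed:.0f} times faster")
```

That ratio of over a hundred is only for the single fastest medieval run, but even the routine figures Cohn cites (1.5 to 6 km per day for the Black Death) dwarf anything observed of Y pestis in the twentieth century.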
mode of transmission
We know that Y pestis is a disease of rats, and we know that the Black Death was all about rats, so that’s an obvious connection, no? Well, according to Cohn, what we think we know is just wrong. ‘… no scholar has found any evidence, archaeological or narrative, of a mass death of rodents that preceded or accompanied any wave of plague from the first or second pandemic.’ I must say I found this incredible when I first read it, yet Cohn seems to have investigated the sources thoroughly.
Cohn notes that:
while plague doctors of “the third pandemic” discovered to their surprise that the bubonic plague of the late nineteenth and twentieth centuries was rarely contagious, contemporaries of the first suggest a highly contagious person-to-person disease. Procopius, Evagrius, John of Ephesus, and Gregory of Tours characterized the disease as contagious and, in keeping with this trait, described it as clustering tightly within households and families; the evidence from burial sites supports their claims.
Pandemic 2 made the word contagium popular among the general public, and the incredible speed of transmission became one of the principal signs of the Black Death, differentiating it, for example, from smallpox, which had some similar physical characteristics. This contagion suggests person-to-person contact, more typical of pneumonic plague, which is highly infectious and can be transmitted through coughing and sneezing. A later chronicler of pandemic 2, Richard Mead, writing in the 1700s, advised against crowding plague sufferers in hospitals, as it ‘will promote and spread the Contagion’. However, those treating pandemic 3 noted, to their surprise, that plague wards were the safest places to be, and that this particular plague rarely took on the pneumonic form.
Cohn notes that the earlier pandemics were often associated with famine. For example in Alexandria and Constantinople in 618 and 619 famine preceded the plague and appeared to spark it into life. However, pandemic 3, definitely caused by Y pestis, tended not to thrive in situations of dearth and was instead fed by increased yields. Higher yields mean higher rat populations, more potentially infected rat fleas, and so higher rates of transmission to humans.
According to contemporary accounts the first pandemic wiped out entire regions, decimating the inhabitants of cities and the countryside through which it so swiftly passed. These accounts are backed up by archaeological and other evidence. It’s pretty clear that millions died in the second pandemic too. Compare this to the third pandemic, which spread so slowly and was limited to coastal areas and even just shipping docks. Restricted to temperate zones, this last pandemic resulted in deaths in the hundreds, with never more than 3% of an affected population dying.
Although few contemporary records describe the signs or symptoms of plague for pandemic one, those that do (and Cohn cites 6 different ancient authors) are in general agreement in their descriptions of ‘swellings in the groin, armpits, or on the neck just below the ear’, the classic symptoms of bubonic plague. Procopius of Caesarea also observed that victims’ bodies were covered in black pustules or lenticulae. Pandemic 2, which begins with the Black Death of 1347-52, is marked, on the other hand, by extensive records, both professional and popular – writings about it were amongst the first forms of popular literature.
range and seasonality
Another problem for the view that this has all been the doing of Y pestis is that pandemics 1 and 2 could strike all year round, but generally settled into a pattern of prevailing in summer in the southern Mediterranean and the Near East, which is not the best season for the flea vector X cheopis. The seasonal cycle of modern plague is quite different, and the range is much more limited.
So all this opens up a mystery. Scientists are agreed that we don’t have a clear-cut story of Y pestis causing horrific disease through rats and fleas over millennia (archaeological and other evidence suggests that rats were scarce in 14th century Europe), but they’re much in disagreement about what the real story might be. If not Y pestis, then maybe a hemorrhagic virus (like the one that causes ebola). Such viruses are notorious for their rapid transmission, their resurgences and their high mortality rates. Pneumonic plague, the more infectious, lung-infecting form of plague, may also be implicated, but this doesn’t appear to agree with most of the described symptoms of pandemics 1 and 2. Other types of fleas, not associated with rats, as well as lice, are also being considered as possible vectors. Some geneticists believe that a variant of Y pestis may have been responsible. It looks as if genetic analysis is the most likely pathway to finding a solution.
This article got started, as I wrote at the beginning, because someone keen on naturopathy said something about bubonic plague in our staff room. Some plant she brought in, which had great anti-oxidant properties (she clearly hasn’t kept up with the latest findings on anti-oxidants) was also a cure for bubonic plague, or maybe it was a variant of the plant, and the person who discovered the secret of its healing properties died suddenly (presumably not from plague) and the secret was lost to us for centuries…