the new ussr illustrated

welcome to the Urbane Society for Skeptical Romantics, where pretentiousness is as common as muck

Archive for the ‘science’ Category

On electrickery, part 1 – the discovery of electrons


Canto: This could be the first of a thousand-odd parts, because speaking for myself it will take me several lifetimes to get my head around this stuff, which is as basic as can be. Matter and charge and why is it so and all that.

Jacinta: so let’s start at random and go in any direction we like.

Canto: Great plan. Do you know what a cathode ray is?

Jacinta: No. I’ve heard of cathodes and anodes, which are positive and negative terminals of batteries and such, but I can’t recall which is which.

Canto: Don’t panic. Positive is anode, negative is cathode. Though I’ve read somewhere that the reverse can sometimes be true. The essential thing is they’re polar opposites.

Jacinta: Good, so a cathode ray is some kind of negative ray? Of electrons?

Canto: A cathode ray is defined as a beam of electrons emitted from the cathode of a high-vacuum tube.

Jacinta: That’s a pretty shitty definition, why would a tube, vacuum or otherwise, have a cathode in it? And what kind of tube? Rubber, plastic, cardboard?

Canto: Well, let’s not get too picky. I’m talking about a cathode ray tube – a sealed glass tube, obviously, evacuated as far as possible. Sciencey types have been playing around with vacuums since the mid-seventeenth century, basically since the vacuum pump was invented in 1654, and electrical experiments in the nineteenth century, with vacuum tubes fitted with cathodes and anodes, led to the discovery of the electron by J J Thomson in 1897.

Jacinta: So what do you mean by a beam of electrons and how is it emitted, and can you give more detail on the cathode, and is there an anode involved? Are there such things as anode rays?

Canto: I’ll get there. Early experiments found that electrostatic sparks travelled further through a near vacuum than through normal air, which raised the question of whether you could get a ‘charge’, or a current, to travel between two relatively distant points in an airless tube. That’s to say, between a cathode and an anode, or two electrodes of opposite polarity. The cathode is of a conducting material such as copper, and yes, there’s an anode at the other end – I’m talking about the early forms, because in modern times it starts to get very complicated. Faraday in the 1830s noted that a light arc could be created between the two electrodes, and later Heinrich Geissler, who achieved a much better vacuum, was able to get the whole tube to glow – an early form of ‘neon light’. They used an induction coil, an early form of transformer, to create high voltages. Induction coils are still used in ignition systems today, as part of the infernal combustion engine.

Jacinta: So do you want to explain what a transformer is in more detail? I’ve certainly heard of them. They ‘create high voltages’, you say. What does that actually mean?

Canto: Do you want me to explain an induction coil, a transformer, or both?

Jacinta: Well, since we’re talking about the 19th century, explain an induction coil.

Canto: Search for it on Google Images. It consists of a magnetic iron core, around which are wound two coils of insulated copper wire, a primary and a secondary winding. The primary is of coarse wire, wound round a few times. The secondary is of much finer wire, wound many, many more times. Now as I’ve said, it’s basically a transformer, and I don’t know what a transformer is, but I’m hoping to find out soon. Its purpose is to ‘produce high-voltage pulses from a low-voltage direct current (DC) supply’, according to Wikipedia.

Jacinta: All of this’ll come clear in the end, right?

Canto: I’m hoping so. When a current – presumably from that low-voltage DC supply – is passed through the primary, a magnetic field is created.

Jacinta: Ahh, electromagnetism…

Canto: And since the secondary shares the core, the magnetic field is also shared. Here’s how Wikipedia describes it, and I think we’ll need to do further reading or video-watching to get it clear in our heads:

The primary behaves as an inductor, storing energy in the associated magnetic field. When the primary current is suddenly interrupted, the magnetic field rapidly collapses. This causes a high voltage pulse to be developed across the secondary terminals through electromagnetic induction. Because of the large number of turns in the secondary coil, the secondary voltage pulse is typically many thousands of volts. This voltage is often sufficient to cause an electric spark to jump across an air gap (G) separating the secondary’s output terminals. For this reason, induction coils were called spark coils.
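As a rough numerical sketch of that voltage multiplication: in an ideal transformer the secondary voltage scales with the ratio of secondary to primary turns. The turn counts below are purely illustrative, not from the text, and a real induction coil delivers pulses rather than a steady output, so treat this as an order-of-magnitude guide only:

```python
# Ideal-transformer approximation: Vs = Vp * (Ns / Np).
# Turn counts are illustrative, chosen to show the 'coarse few turns'
# primary versus 'fine many turns' secondary described above.

def secondary_voltage(v_primary, n_primary, n_secondary):
    """Estimate secondary voltage from the turns ratio (ideal, lossless)."""
    return v_primary * n_secondary / n_primary

# A 12 V supply, 100 coarse primary turns, 20,000 fine secondary turns:
print(secondary_voltage(12, 100, 20_000))  # -> 2400.0 volts
```

The collapsing-field trick is what lets a pulsed DC source reach these ‘many thousands of volts’ in practice.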

Jacinta: Okay, so much for an induction coil, to which we shall no doubt return, as well as to inductors and electromagnetic radiation. Let’s go back to the cathode ray tube and the discovery of the electron.

Canto: No, I need to continue this, as I’m hoping it’ll help us when we come to explaining transformers. Maybe. A key component of the induction coil was/is the interruptor. To have the coil functioning continuously, you have to repeatedly connect and disconnect the DC current. So a magnetically activated device called an interruptor or a break is mounted beside the iron core. It has an armature mechanism which is attracted by the increasing magnetic field created by the DC current. It moves towards the core, disconnecting the current, the magnetic field collapses, creating a spark, and the armature springs back to its original position. The current is reconnected and the process is repeated, cycling through many times per second.

A Crookes tube showing green fluorescence. The shadow of the metal cross on the glass showed that electrons travelled in straight lines

Jacinta: Right, so now I’ll take us back to the cathode ray tube, starting with the Crookes tube, developed around 1870. When we’re talking about cathode rays, they’re just electron beams. But they certainly didn’t know that in the 1870s. The Crookes tube, simply a partially evacuated glass tube with a cathode and an anode at either end, was what Röntgen used to discover X-rays.

Canto: What are X-rays?

Jacinta: Electromagnetic radiation within a specific range of wavelengths. So the Crookes tube was an instrument for exploring the properties of these cathode rays. They applied a high DC voltage to the tube, via an induction coil, which ionised the small amount of air left in the tube – that’s to say it accelerated the motions of the small number of ions and free electrons, creating greater ionisation.

x-rays and the electromagnetic spectrum, taken from an article on the Chandra X-ray observatory

Canto: A rapid multiplication effect called a Townsend discharge.

Jacinta: An effect which can be analysed mathematically. The first ionisation event produces an ion pair, accelerating the positive ion towards the cathode and the freed electron toward the anode. Given a sufficiently strong electric field, the electron will have enough energy to free another electron in the next collision. The two freed electrons will in turn free more electrons, and so on, with the collisions and freed electrons growing exponentially, though the growth has a limit, called the Raether limit. But all of that was worked out much later. In the days of Crookes tubes, atoms were the smallest particles known, though they were really only hypothesised, particularly through the work of the chemist John Dalton in the early nineteenth century. And of course they were thought to be indivisible, as the name implies.
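That exponential growth can be sketched numerically. This is a minimal model of the avalanche, with an illustrative ionisation coefficient (the real value depends on the gas, the pressure and the field strength):

```python
import math

# Townsend avalanche: n0 seed electrons grow to n = n0 * exp(alpha * d)
# over a distance d, where alpha is the first Townsend coefficient
# (ionising collisions per unit length). Values here are illustrative.

def avalanche_size(n0, alpha, d):
    """Number of free electrons after the avalanche has run distance d."""
    return n0 * math.exp(alpha * d)

# One seed electron, alpha = 2 ionisations per mm, across a 5 mm gap:
print(round(avalanche_size(1, 2.0, 5.0)))  # about 22026 electrons
# The growth can't continue forever - space-charge effects cap it
# (the Raether limit, roughly alpha * d of about 20).
```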

Canto: We had no way of ‘seeing’ atoms in those days, and cathode rays themselves were invisible. What experimenters saw was a fluorescence, because many of the highly energised electrons, though aiming for the anode, would fly past it and strike the back of the glass tube, where they excited orbital electrons to higher energy levels, making the glass glow. Experimenters were able to enhance this fluorescence by, for example, painting the inside walls of the tube with zinc sulphide.

Jacinta: So the point is, though electrical experiments had been carried out since the days of Benjamin Franklin in the mid-eighteenth century, and before, nobody knew how an electric current was transmitted. Without going into much detail, some thought it was carried by particles (like radiant atoms), others thought it was waves. J J Thomson, an outstanding theoretical and mathematical physicist, who had already done significant work on the particulate nature of matter, turned his attention to cathode rays and found that their velocity indicated a much lighter ‘element’ than the lightest element known, hydrogen. He also found that their velocity was uniform with respect to the current applied to them, regardless of the (atomic) nature of the gas being ionised. His experiments suggested that these ‘corpuscles’, as they were initially called, were 1000 times lighter than hydrogen atoms. His work was clearly very important in the development of atomic theory – which in large measure he initiated – and he developed his own ‘plum pudding’ theory of atomic structure.
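Thomson’s mass estimate can be checked against modern figures. A quick comparison using present-day approximate values (these numbers are not from the text):

```python
# Approximate modern values; Thomson's 1897 estimate of ~1000x lighter
# was the right order of magnitude.
ELECTRON_MASS_KG = 9.109e-31
HYDROGEN_ATOM_MASS_KG = 1.674e-27

ratio = HYDROGEN_ATOM_MASS_KG / ELECTRON_MASS_KG
print(round(ratio))  # about 1838 - a hydrogen atom is ~1838 times heavier
```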

Canto: So that was all very interesting – next time we’ll have a look at electricity from another angle, shall we?

 


Written by stewart henderson

October 1, 2017 at 8:14 pm

stand-alone solar: an off-grid solution for Australia’s remote regions (plus a bit of a rant)


According to this article, Australia is leading the world in per capita uptake of rooftop solar, though currently South Australia is lagging behind, in spite of a lot of clean energy action from our government. The Clean Energy Regulator has recently released figures showing that 23% of Australians have installed rooftop solar in the last ten years, and this take-up is set to continue in spite of the notable lack of encouragement from the feds. South Australia is still making plenty of waves re clean energy, though, as it keeps lowering its record for minimum grid demand, through the use of solar PV. The record, set a couple of days ago – interestingly, on a Sunday afternoon rather than in the middle of the night – was 587MW, almost 200MW less than the previous record set only a week or so earlier. Clearly this trend is set to continue.

It’s hard for me to get my head around what’s happening re disruptive technologies, microgrids, stand-alone solar, EVs, battery research and the like, not to mention the horribly complex economics around these developments, but the sense of excitement brought about by comprehensive change makes me ever-willing to try. Only this morning I heard a story of six farming households described as being ‘on the fringe of Western Australia’s power network’ who’ve successfully trialled stand-alone solar systems (solar panels backed by lithium-ion batteries) on their properties, after years of outages and ‘voltage spikes’*. The systems – and this is the fascinating part – were offered free by Western Power (WA’s government-owned energy utility), which was looking for a cheaper alternative to the cost of replacing ageing infrastructure. The high costs of connecting remote farms to the grid make off-grid power systems a viable alternative, and the decreasing costs of solar PV raise the question of their viability elsewhere. These systems can also maintain electricity during power outages, as one Ravensthorpe family, part of the trial, discovered in January this year. The region, 500 kilometres south of Perth, experienced heavy rain and flooding which caused power failures, but the solar systems were unaffected. All in all, the trial has ‘exceeded expectations’, according to this ABC report.

All this has exciting implications for the future, but there are immediate problems. Though Western Power would like to sign off on the trial as an overwhelming success, and to apply this solution to other communities in the area (3,000 potential sites have been pinpointed), current regulation prevents this, as it only allows Western Power to distribute energy, not to generate it, as its solar installations are judged as doing. Another instance of regulations not keeping up with changing circumstances and solutions. Western Power has no alternative but to extend the trial period until the legislation catches up (assuming it does). But it would surely be a mistake not to change the law asap:

“You’d be talking about a saving of about $300 million in terms of current cost of investment and cost of ongoing maintenance of distribution line against the cost of the stand-alone power system,” Mr Chalkley [Western Power CEO] said.

Just as a side issue, it’s interesting that our PM Malcolm Turnbull, whose government seems on the whole to be avoiding any mention of clean energy these days, has had solar panels on his harbourside mansion in Point Piper, Sydney, for years. He now has an upgraded 14 kW rooftop solar array and a 14kWh battery storage system installed there, and, according to a recent interview he did on radio 3AW, he doesn’t draw any electricity from the grid, in spite of using a lot of electricity for security as Prime Minister. Solar PV plus battery, I’m learning, equals a distributed solar system. The chief of AEMO (the Australian Energy Market Operator), Audrey Zibelman, recently stated that distributed rooftop solar is on its way to making up 30 to 40% of our energy generation mix, and that it could be used as a resource to replace baseload, as currently provided by coal and gas stations (I shall write about baseload power issues, for my own instruction, in the near future).

Of course Turnbull isn’t exactly spruiking the benefits of renewable energy, having struck a Faustian bargain with his conservative colleagues in order to maintain his prestigious position as PM. We can only hope for a change of government to have any hope of a national approach to the inevitable energy transition, and even then it’ll be a hard row to hoe. Meanwhile, Tony Abbott, Turnbull’s arch-conservative bête noire, continues to represent the dark side. How did this imbecilic creature ever get to be our Prime Minister? Has he ever shown any signs of scientific literacy? Again I would urge extreme vetting of all candidates for political office, here and elsewhere, based on a stringent scientific literacy test. Imagine the political shite that would be flushed down the drain with that one. Abbott, you’ll notice, always talks of climate change and renewable energy in religious terms, as a modern religion. That’s because religion is his principal obsession. He can’t talk about it in scientific terms, because he doesn’t know any. Unfortunately, these politicians are rarely challenged by journalists, and are often free to choose friendly journalists who never challenge their laughable remarks. It’s a bit of a fucked-up system.

Meanwhile the ‘green religionists’ – the Chinese and Indian governments, the German and Scandinavian governments, Elon Musk and those who invest in his companies, and the researchers and scientists who continue to improve solar PV, wind turbine and battery technology, including flow batteries, supercapacitors and so much more – keep disrupting traditional ways of providing energy, and will continue to do so, in spite of name-calling from the fringes (to which they’re largely deaf, given the huge level of support they enjoy). It really is an exciting time not to be a dinosaur.

 

Written by stewart henderson

September 20, 2017 at 9:32 pm

capacitors, supercapacitors and electric vehicles


from the video ‘what are supercapacitors’

Jacinta: New developments in battery and capacitor technology are enough to make any newbie’s head spin.

Canto: So what’s a supercapacitor? Apart from being a super capacitor?

Jacinta: I don’t know, but I need to find out fast, because supercapacitors are about to be eclipsed by a new technology developed in Great Britain which they estimate as being ‘between 1,000 and 10,000-times more effective than current supercapacitors’.

Canto: Shite, they’ll have to think of a new name, or downgrade the others to ‘those devices formerly known as supercapacitors’. But then, I’ll believe this new tech when I see it.

Jacinta: Now now, let’s get on board, superdisruptive technology here we come. Current supercapacitors are called such because they can charge and discharge very quickly over large numbers of cycles, but their storage capacity is limited in comparison to batteries…

Canto: Apparently young Elon Musk predicted some time ago that supercapacitors would provide the next major breakthrough in EVs.

Jacinta: Clever he. But these ultra-high-energy density storage devices, these so-much-more-than-super-supercapacitors, could enable an EV to be charged to a 200 kilometre range in just a few seconds.

Canto: So can you give more detail on the technology?

Jacinta: The development is from a UK technology firm, Augmented Optics, and what I’m reading tells me that it’s all about ‘cross-linked gel electrolytes’ with ultra-high capacitance values which can combine with existing electrodes to create supercapacitors with greater energy storage than existing lithium-ion batteries. So if this technology works out, it will transform not only EVs but mobile devices, and really anything you care to mention, over a range of industries. Though everything I’ve read about this dates back to late last year, or reports on developments from then. Anyway, it’s all about the electrolyte material, which is some kind of highly conductive organic polymer.

Canto: Apparently the first supercapacitors were invented back in 1957. They store energy by means of static charge, and I’m not sure what that means…

Jacinta: We’ll have to do a post on static electricity.

Canto: In any case their energy density hasn’t been competitive with the latest batteries until now.

Jacinta: Yes it’s all been about energy density apparently. That’s one of the main reasons why the infernal combustion engine won out over the electric motor in the early days, and now the energy density race is being run between new-age supercapacitors and batteries.

Canto: So how are supercapacitors used today? I’ve heard that they’re useful in conjunction with regenerative braking, and I’ve also heard that there’s a bus that runs entirely on supercapacitors. How does that work?

Jacinta: Well back in early 2013 Mazda introduced a supercapacitor-based regen braking system in its Mazda 6. To quote more or less from this article by the Society of Automotive Engineers (SAE), kinetic energy from deceleration is converted to electricity by the variable-voltage alternator and transmitted to a supercapacitor, from which it flows through a dc-dc converter to 12-V electrical components.
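To get a feel for how much energy braking makes available for a system like that, here’s a back-of-envelope kinetic energy calculation. The car mass and speed are made-up illustrative figures, and a real regen system captures only a fraction of this:

```python
# Kinetic energy of a moving vehicle: KE = 0.5 * m * v^2 (joules).

def kinetic_energy_joules(mass_kg, speed_m_per_s):
    return 0.5 * mass_kg * speed_m_per_s ** 2

# A ~1500 kg car slowing from 60 km/h (= 60/3.6 m/s) to rest:
ke = kinetic_energy_joules(1500, 60 / 3.6)
print(round(ke))  # 208333 J - roughly 0.06 kWh available per stop
```

Small per stop, but it’s exactly the brief, high-power burst that suits a supercapacitor rather than a battery.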

Canto: Oh right, now I get it…

Jacinta: We’ll have to do posts on alternators, direct current and alternating current. As for your bus story, yes, capabuses, as they’re called, are being used in Shanghai. They use supercapacitors, or ultracapacitors as they’re sometimes called, for onboard power storage, and this usage is likely to spread with the continuous move away from fossil fuels and with developments in supercaps, as I’ve heard them called. Of course, this is a hybrid technology, but I think they’ll be going fully electric soon enough.

Canto: Or not soon enough for a lot of us.

Jacinta: Apparently, with China’s dictators imposing stringent emission standards, electric buses operating on power lines (we call them trams) became more common. Of course electricity may be generated by coal-fired power stations, and that’s a problem, but this fascinating article looking at the famous Melbourne tram network (run mainly on dirty brown coal) shows that with high occupancy rates the greenhouse footprint per person is way lower than for car users and their passengers. But the capabuses don’t use power lines; they apparently run on set routes and charge regularly at recharge stops along the way. The technology is being adopted elsewhere too, of course.

Canto: So let me return again to basics – what’s the difference between a capacitor and a super-ultra-whatever-capacitor?

Jacinta: I think the difference is just in the capacitance. I’m inferring that because I’m hearing, on these videos, capacitors being talked about in terms of microfarads (a farad, remember, being a unit of capacitance), whereas supercapacitors have ‘super capacitance’, i.e. more energy storage capability. But I’ve just discovered a neat video which really helps in understanding all this, so I’m going to do a breakdown of it. First, it shows a range of supercapacitors, which look very much like batteries, the largest of which has a capacitance, as shown on the label, of 3000 farads. So, more super than your average capacitor. It also says 2.7 V DC, which I’m sure is also highly relevant. We’re first told that they’re often used in the energy recovery system of vehicles, and that they have a lower energy density (10 to 100 times less than the best Li-ion batteries), but that they can deliver 10 to 100 times more power than a Li-ion battery.

Canto: You’ll be explaining that?

Jacinta: Yes, later. Another big difference is in charge-recharge cycles. A good rechargeable battery may manage a thousand charge and recharge cycles, while a supercap can be good for a million. And the narrator even gives a reason, which excites me – it’s because they function by the movement of ions rather than by chemical reactions, as batteries do. I’ve seen that in the videos on capacitors, described in our earlier post. A capacitor has to be hooked up to a battery – a power source. So then he uses an analogy to show the difference between power and energy, and I’m hoping it’ll provide me with a long-lasting lightbulb moment. His analogy is a bucket with a hole. The amount of water the bucket can hold – the size of the bucket if you like – equates to the bucket’s energy capacity. The size of the hole determines the amount of power it can release. So with this in mind, a supercap is like a small bucket with a big hole, while a battery is more like a big bucket with a small hole.
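The bucket analogy can be put in numbers: run time is just energy capacity divided by power output. The figures below are illustrative orders of magnitude, not measurements from the video:

```python
# Run time at a sustained power draw: t = energy / power.

def run_time_seconds(energy_joules, power_watts):
    return energy_joules / power_watts

# Supercap: small bucket, big hole - 10 kJ released at 5 kW:
print(run_time_seconds(10_000, 5_000))   # 2.0 seconds
# Battery: big bucket, small hole - 1 MJ released at 100 W:
print(run_time_seconds(1_000_000, 100))  # 10000.0 seconds, nearly 3 hours
```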

Canto: So the key to a supercap is that it can provide a lot of power quickly, by discharging, then it has to be recharged. That might explain their use in those capabuses – I think.

Jacinta: Yes, for regenerative braking, for cordless power tools and for flash cameras, and also for brief peak power supplies. Now I’ve jumped to another video, which inter alia shows how a supercapacitor coin cell is made – I’m quite excited about all this new info I’m assimilating. A parallel plate capacitor’s plates are separated by a non-conducting dielectric, and its capacitance is directly proportional to the surface area of the plates and inversely proportional to the distance between them. A supercapacitor’s longer life is largely due to the fact that no chemical reaction occurs between the two plates. Supercapacitors have an electrolyte between the plates rather than a dielectric…
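Those proportionalities are captured in the standard parallel-plate formula C = ε₀εᵣA/d. A quick sketch (plate dimensions invented for illustration) shows why ordinary plates give such tiny capacitances, and why supercapacitors need enormous effective surface areas:

```python
# Parallel-plate capacitance: C = eps0 * eps_r * A / d.
EPSILON_0 = 8.854e-12  # permittivity of free space, farads per metre

def plate_capacitance(area_m2, separation_m, relative_permittivity=1.0):
    return EPSILON_0 * relative_permittivity * area_m2 / separation_m

# 1 cm x 1 cm plates, 1 mm apart, with an air gap:
c = plate_capacitance(0.01 * 0.01, 0.001)
print(c)  # ~8.85e-13 F - under a picofarad, versus 3000 F for a supercap
```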

Canto: What’s the difference?

Jacinta: A dielectric is an insulating material that causes polarisation in an electric field, but let’s not go into that now. Back to supercapacitors and the first video. It describes one containing two identical carbon-based high surface area electrodes with a paper-based separator between. They’re connected to aluminium current collectors on each side. Between the electrodes, positive and negative ions float in an electrolyte solution. That’s when the cell isn’t charged. In a fully charged cell, the ions attach to the positively and negatively charged electrodes (or terminals) according to the law of attraction. So, our video takes us through the steps of the charge-storage process. First we connect our positive and negative terminals to an energy source. At the negative electrode an electrical field is generated and the electrode becomes negatively charged, attracting positive ions and repelling negative ones. Simultaneously, the opposite is happening at the positive electrode. In each case the ‘counter-ions’ are said to adsorb to the surface of the electrode…

Canto: Adsorption is the adherence of ions – or atoms or molecules – to a surface.

Jacinta: So now there’s a strong electrical field which holds together the electrons from the electrode and the positive ions from the electrolyte. That’s basically where the potential energy is being stored. So now we come to the discharge part, where we remove electrons through the external circuit. At the electrode-electrolyte interface we would then have an excess of positive ions, so a positive ion is repelled in order to return the interface to a state of charge neutrality – that is, a balance of negative and positive charge. So to summarise from the video, supercapacitors aren’t a substitute for batteries. They’re suited to different applications – applications requiring high power, with moderate to low energy requirements (in cranes and lifts, for example). They can also be used as voltage support for high-energy devices, such as fuel cells and batteries.
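A common first approximation of charge and discharge in any capacitor, super or otherwise, is the exponential RC model. This is a generic textbook sketch, not something from the video, and the series resistance value is invented for illustration:

```python
import math

# RC charging: V(t) = V_source * (1 - exp(-t / (R*C))).
# After one time constant tau = R*C, the capacitor reaches ~63% of full voltage.

def charging_voltage(v_source, r_ohms, c_farads, t_seconds):
    tau = r_ohms * c_farads
    return v_source * (1 - math.exp(-t_seconds / tau))

# A 2.7 V source charging a 3000 F supercap through 0.01 ohm (tau = 30 s):
print(round(charging_voltage(2.7, 0.01, 3000, 30), 3))  # 1.707 V after one tau
```

The very low internal resistance of supercapacitors is a big part of why they can charge and discharge so quickly.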

Canto: What’s a fuel cell? Will we do a post on that?

Jacinta: Probably. The video mentions that Honda has used a bank of ultracapacitors in their FCX fuel-cell vehicle to protect the fuel cell (whatever that is) from rapid voltage fluctuations. The reliability of supercapacitors makes them particularly useful in applications that are described as maintenance-free, such as space travel and wind turbines. Mazda also uses them to capture waste energy in their i-Eloop energy recovery system, as used on the Mazda 6 and the Mazda 3, which sounds like something worth investigating.

References (videos can be accessed from the links above)

http://www.hybridcars.com/supercapacitor-breakthrough-allows-electric-vehicle-charging-in-seconds/

https://en.wikipedia.org/wiki/Supercapacitor

http://www.power-technology.com/features/featureelectric-vehicles-putting-the-super-in-supercapacitor-5714209/

http://articles.sae.org/11845/

https://www.ptua.org.au/myths/tram-emissions/

http://www.europlat.org/capabus-the-finest-advancement-for-electric-buses.htm

Written by stewart henderson

September 5, 2017 at 10:08 am

on the explosion of battery research – part one, some basic electrical concepts, and something about solid state batteries…


just another type of battery technology not mentioned in this post

Okay I was going to write about gas prices in my next post but I’ve been side-tracked by the subject of batteries. Truth to tell, I’ve become mildly addicted to battery videos. So much seems to be happening in this field that it’s definitely affecting my neurotransmission.

Last post, I gave a brief overview of how lithium ion batteries work in general, and I made mention of the variety of materials used. What I’ve been learning over the past few days is that there’s an explosion of research into these materials as teams around the world compete to develop the next generation of batteries, sometimes called super-batteries just for added exhilaration. The key factors in the hunt for improvements are energy density (more energy for less volume), safety and cost.

To take an example, in this video describing one company’s production of lithium-ion batteries for electric and hybrid vehicles, four elements are mentioned – lithium, for the anode, a metallic oxide for the cathode, a dry solid polymer electrolyte and a metallic current collector. This is confusing. In other videos the current collectors are made from two different metals but there’s no mention of this here. Also in other videos, such as this one, the anode is made from layered graphite and the cathode is made from a lithium-based metallic oxide. More importantly, I was shocked to hear of the electrolyte material as I thought that solid electrolytes were still at the experimental stage. I’m on a steep and jagged learning curve. Fact is, I’ve had a mental block about electricity since high school science classes, and when I watch geeky home-made videos talking of volts, amps and watts I have no trouble thinking of Alessandro Volta, James Watt and André-Marie Ampère, but I have no idea of what these units actually measure. So I’m going to begin by explaining some basic concepts for my own sake.

Amps

Metals are different from other materials in that electrons, those negatively-charged sub-atomic particles that buzz around the nucleus, are able to move between atoms. The best metals in this regard, such as copper, are described as conductors. However, like-charged electrons repel each other, so if you apply a force which pushes electrons in a particular direction, they will displace other electrons, creating a flow which we call an electrical current (the signal propagates at near light-speed, though the electrons themselves drift much more slowly). An amp is simply a measure of electron flow in a current, 1 ampere being 6.24 x 10^18 electrons passing a given point per second. Two amps is twice that, and so on. This useful video provides info on a spectrum of currents, from the tiny ones in our mobile phone antennae to the very powerful ones in bolts of lightning. We use batteries to create the above-mentioned force. Connecting a battery to, say, a copper wire attached to a light bulb causes the current to flow to the bulb – a transfer of energy. Inserting a switch cuts off and reconnects the circuit. Fuses work in a similar way. Fuses are rated at a particular amperage, and if the current is too high, the fuse will melt, breaking the circuit. The battery’s negative electrode, or anode, drives the current, repelling electrons and creating a cascade effect through the wire, though I’m still not sure how that happens (perhaps I’ll find out when I look at voltage or something).
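That definition translates directly into arithmetic: current in amps times 6.24 x 10^18 gives electrons per second. A trivial sketch:

```python
# One ampere = one coulomb per second = about 6.24e18 electrons per second.
ELECTRONS_PER_COULOMB = 6.24e18

def electrons_per_second(current_amps):
    return current_amps * ELECTRONS_PER_COULOMB

print(electrons_per_second(1))  # 6.24e+18
print(electrons_per_second(2))  # 1.248e+19 - twice the flow, twice the amps
```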

Volts

So, yes, volts are what push electrons around in an electric current. So a voltage source, such as a battery or an adjustable power supply, as in this video, produces a measurable force which, applied to a conductor, creates a current measurable in amps. The video also points out that voltage can be used as a signal, representing data – a whole other realm of technology. So to understand how voltage does what it does, we need to know what it is. It’s the product of a chemical reaction inside the battery, and it’s defined technically as a difference in electrical potential energy, per unit of charge, between two points. Potential energy is defined as ‘the potential to do work’, and that’s what a battery has. Energy – the ability to do work – is a scientific concept, which we measure in joules. A battery has electrical potential energy, as a result of the chemical reactions going on inside it (or the potential chemical reactions? I’m not sure). A unit of charge is called a coulomb. One amp of current is equal to one coulomb of charge flowing per second. This is where it starts to get like electrickery for me, so I’ll quote directly from the video:

When we talk about electrical potential energy per unit of charge, we mean that a certain number of joules of energy are being transferred for every unit of charge that flows.

So apparently, with a 1.5 volt battery (and I note that’s your standard AA and AAA batteries), for every coulomb of charge that flows, 1.5 joules of energy are transferred. That is, 1.5 joules of chemical energy are being converted to electrical potential energy (I’m writing this but I don’t really get it). This is called ‘voltage’. So for every coulomb’s worth of electrons flowing, 1.5 joules of energy are produced and carried to the light bulb (or whatever), in that case producing light and heat. So the key is, one volt equals one joule per coulomb, four volts equals 4 joules per coulomb… Now, it’s a multiplication thing. In the adjustable power supply shown in the video, one volt (or joule per coulomb) produced 1.8 amps of current (1.8 coulombs per second). For every coulomb, a joule of energy is transferred, so in this case 1 x 1.8 joules of energy are being transferred every second. If the voltage is pushed up to two (2 joules per coulomb), it produces around 2 amps of current, so that’s 2 x 2 joules per second. Get it? So a 1.5 volt battery indicates that there’s a difference in electrical potential energy of 1.5 volts between the negative and positive terminals of the battery.

Watts

A watt is a unit of power, measured in joules per second: one watt equals one joule per second. So in the previous example, if 2 volts of pressure creates 2 amps of current, the result is that four watts of power are produced (voltage x current = power). So to produce a certain quantity of power, you can vary the voltage and the current, as long as the multiplied result is the same. For example, highly efficient LED lighting produces more light per watt than incandescent bulbs, which waste much of their energy as heat.
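Since power is just voltage times current, the watts arithmetic above can be checked with a couple of lines of Python (an illustration only; the function name is mine):

```python
# Power (watts) is voltage times current, i.e. joules per second. The same
# power can come from different voltage/current combinations, as noted above.

def power_watts(volts, amps):
    return volts * amps

print(power_watts(2, 2))     # the example above: 2 V x 2 A = 4 W
print(power_watts(12, 1/3))  # a different combination giving the same 4 W
```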

Ohms and Ohm’s law

The flow of electrons, the current, through a wire may sometimes be too much to power a device safely, so we need a way to control the flow. We use resistors for this. In fact everything, including highly conductive copper, has resistance. The atoms in the copper vibrate slightly, hindering the flow and producing heat. Metals just happen to have less resistance than other materials. Resistance is measured in ohms (Ω). Less than one Ω would be a very low resistance. A megaohm (1 million Ω) would mean a very poor conductor. Using resistors with particular resistance values allows you to control the current flow. The mathematical relations between resistance, voltage and current are expressed in Ohm’s law, V = I x R, or R = V/I, or I = V/R (I being the current in amps). Thus, if you have a voltage (V) of 10, and you want to limit the current (I) to 10 milliamps (10mA, or 0.01A), you would require a value for R of 1,000Ω. You can, of course, buy resistors of various values if you want to experiment with electrical circuitry, or for other reasons.
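Ohm’s law and its rearrangements can be expressed as a small Python sketch (the function names are mine; the numbers reproduce the 10 V, 10 mA example above):

```python
# Ohm's law as three rearrangements, reproducing the example above:
# 10 volts limited to 10 mA needs a 1000-ohm resistor.

def resistance(volts, amps):
    return volts / amps

def current(volts, ohms):
    return volts / ohms

def voltage(amps, ohms):
    return amps * ohms

print(resistance(10, 0.01))  # 1000 ohms
print(current(10, 1000))     # 0.01 A, i.e. 10 mA
```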

That’s enough about electricity in general for now, though I intend to continue to educate myself little by little on this vital subject. Let’s return now to the lithium-ion battery, which has so revolutionised modern technology. Its co-inventor, John Goodenough, in his nineties, has led a team which has apparently produced a new battery that is a great improvement on ole dendrite-ridden lithium-ion shite. These dendrites appear when the Li-ion batteries are charged too quickly. They’re strandy things that make their way through the liquid electrolyte and can cause a short-circuit. Goodenough has been working with Helena Braga, who has developed a solid glass electrolyte which has eliminated the dendrite problem. Further, they’ve replaced or at least modified the lithium metal oxide and the porous carbon electrodes with readily available sodium, and apparently they’re using much the same material for the cathode as the anode, which doesn’t make sense to many experts. Yet apparently it works, due to the use of glass, and only needs to be scaled up by industry, according to Braga. It promises to be cheaper, safer, faster-charging, more temperature-resistant and more energy dense than anything that has gone before. We’ll have to wait a while, though, to see what peer reviewers think, and how industry responds.

Now, I’ve just heard something about super-capacitors, which I suppose I’ll have to follow up on. And I’m betting there’re more surprises lurking in labs around the world…

 

 

Written by stewart henderson

July 29, 2017 at 4:00 pm

how evolution was proved to be true


The origin of species is a natural phenomenon

Jean-Baptiste Lamarck

The origin of species is an object of inquiry

Charles Darwin

The origin of species is an object of experimental investigation

Hugo de Vries

(quoted in The Gene: an intimate history, by Siddhartha Mukherjee)

Gregor Mendel

I’ve recently read Siddhartha Mukherjee’s monumental book The Gene: an intimate history, a work of literature as well as science, and I don’t know quite where to start with its explorations and insights. But since, as a teacher of international students, some of whom come from Arabic countries, I’m occasionally faced with disbelief regarding the Darwin-Wallace theory of natural selection from random variation (usually in some such form as ‘you don’t really believe we come from monkeys, do you?’), I think it might be interesting, and useful for me, to trace the connections, in time and ideas, between that theory and the discovery of genes that the theory essentially led to.

One of the problems for Darwin’s theory, as first set down, was how variations could be fixed in subsequent generations. And of course another problem was – how could a variation occur in the first place? How were traits inherited, whether they varied from the parent or not? As Mukherjee points out, heredity needed to be both regular and irregular for the theory to work.

There were few clues in Darwin’s day about inheritance and mutation. Apart from realising that it must have something to do with reproduction, Darwin himself could only half-heartedly suggest an unoriginal notion of blending inheritance, while also leaning at times towards Lamarckian inheritance of acquired characteristics – which he at other times scoffed at.

Mukherjee argues here that Darwin’s weakness was impracticality: he was no experimenter, though a keen observer. The trouble was that no amount of observation, in Darwin’s day, would uncover genes. Even Mendel was unable to do that, at least not in the modern DNA sense. But in any case Darwin lacked Mendel’s experimental genius. Still, he did his best to develop a hypothesis of inheritance, knowing it was crucial to his overall theory. He called it pangenesis. It involved the idea of ‘gemmules’ inhabiting every cell of an organism’s body and somehow shaping the varieties of organs, tissues, bones and the like, and then specimens of these varied gemmules were collected into the germ cells to produce ‘mixed’ offspring, with gemmules from each partner. Darwin describes it rather vaguely in his book The Variation of Animals and Plants under Domestication, published in 1868:

They [the gemmules] are collected from all parts of the system to constitute the sexual elements, and their development in the next generation forms the new being; but they are likewise capable of transmission in a dormant state to future generations and may then be developed.

Darwin himself admitted his hypothesis to be ‘rash and crude’, and it was effectively demolished by a very smart Scotsman, Fleeming Jenkin, who pointed out that a trait would be diluted away by successive unions with those who didn’t have it (Jenkin gave as an example the trait of whiteness, i.e. having ‘white gemmules’, but a better example would be that of blue eyes). With an intermingling of sexual unions, specific traits would be blended over time into a kind of uniform grey, like paint pigments (think of Blue Mink’s hit song ‘Melting Pot’).

Darwin was aware of and much troubled by Jenkin’s critique, but he (and the scientific world) wasn’t aware that a paper published in 1866 had provided the solution – though he came tantalisingly close to that awareness. The paper, ‘Experiments in Plant Hybridisation’, by Gregor Mendel, reported carefully controlled experiments in the breeding of pea plants. First Mendel isolated ‘true-bred’ plants, noting seven true-bred traits, each of which had two variants (smooth or wrinkled seeds; yellow or green seeds; white or violet coloured flowers; flowers at the tip or at the branches; green or yellow pods; smooth or crumpled pods; tall or short plants). These variants of a particular trait are now known as alleles. 

Next, he began a whole series of painstaking experiments in cross-breeding. He wanted to know what would happen if, say, a green-podded plant was crossed with a yellow-podded one, or if a short plant was crossed with a tall one. Would they blend into an intermediate colour or height, or would one dominate? He was well aware that this was a key question for ‘the history of the evolution of organic forms’, as he put it.

He experimented in this way for some eight years, with thousands of crosses and crosses of crosses, and the more the crosses multiplied, the more clearly he found patterns emerging. The first pattern was clear – there was no blending. With each crossing of true-bred variants, only one variant appeared in the offspring – only tall plants, only round peas and so on. Mendel named them as dominant traits, and the non-appearing ones as recessive. This was already a monumental result, blowing away the blending hypothesis, but as always, the discovery raised as many questions as answers. What had happened to the recessive traits, and why were some traits recessive and others dominant?

Further experimentation revealed that disappeared traits could reappear in toto in further cross-breedings. Mendel had to carefully analyse the relations between different recessive and dominant traits as they were cross-bred in order to construct a mathematical model of the different ‘indivisible, independent particles of information’ and their interactions.
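Mendel’s pattern of dominant and recessive traits can be illustrated with a toy Python model of a single-trait cross (the allele letters and function names are mine, not Mendel’s notation):

```python
from itertools import product

# A toy model of the pattern Mendel found: cross true-bred tall (TT) and
# short (tt) plants; T is dominant over t.

def offspring(parent1, parent2):
    """All equally likely allele combinations from two parents."""
    return [''.join(sorted(a + b)) for a, b in product(parent1, parent2)]

def phenotype(genotype):
    return 'tall' if 'T' in genotype else 'short'

f1 = offspring('TT', 'tt')            # every F1 plant is Tt, so all tall
print(set(phenotype(g) for g in f1))  # the recessive trait vanishes

f2 = offspring('Tt', 'Tt')            # crossing the hybrids
print([phenotype(g) for g in f2])     # tall:short in a 3:1 ratio - short reappears
```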

Although Mendel was alert to the importance of his work, he was spectacularly unsuccessful in alerting the biological community to this fact, due partly to his obscurity as a researcher, and partly to the underwhelming style of his landmark paper. Meanwhile others were aware of the centrality of inheritance to Darwin’s evolutionary theory. The German embryologist August Weismann added another nail to the coffin of the ‘gemmule’ hypothesis in 1883, a year after Darwin’s death, by showing that mice with surgically removed tails – thus having their ‘tail gemmules’ removed – never produced tail-less offspring. Weismann presented his own hypothesis, that hereditary information was always and only passed down vertically through the germ-line, that’s to say, through sperm and egg cells. But how could this be so? What was the nature of the information passed down, information that could contain stability and change at the same time?

The Dutch botanist Hugo de Vries, inspired by a meeting with Darwin himself not long before the latter’s death, was possessed by these questions and, though Mendel was completely unknown to him, he too looked for the answer through plant hybridisation, though less systematically and without the good fortune of hitting on true-breeding pea plants as his subjects. However, he gradually became aware of the particulate nature of hereditary information, with these particles (he called them ‘pangenes’, in deference to Darwin’s ‘pangenesis’), passing down information intact through the germ-line. Sperm and egg contributed equally, with no blending. He reported his findings in a paper entitled Hereditary monstrosities in 1897, and continued his work, hoping to develop a more detailed picture of the hereditary process. So imagine his surprise when in 1900 a colleague sent de Vries a paper he’d unearthed, written by ‘a certain Mendel’ from the 1860s, which displayed a clearer understanding of the hereditary process than anyone had so far managed. His response was to rush his own most recent work into press without mentioning Mendel. However, two other botanists, both as it happened working with pea hybrids, also stumbled on Mendel’s work at the same time. Thus, in a three-month period in 1900, three leading botanists wrote papers highly indebted to Mendel after more than three decades of profound silence.

Hugo de Vries

The next step of course, was to move beyond Mendel. De Vries, who soon corrected his unfair treatment of his predecessor, sought to answer the question ‘How do variants arise in the first place?’ He soon found the answer, and another solid proof of Darwin’s natural selection. The ‘random variation’ from which nature selected, according to the theory, could now be given a name of de Vries’ coinage: ‘mutation’. The Dutchman had collected many thousands of seeds from a wild primrose patch during his country rambles, which he planted in his garden. He identified some 800 new variants, many of them strikingly original. These random ‘spontaneous mutants’, he realised, could be combined with natural selection to create the engine of evolution, the variety of all living things. And key to this variety wasn’t the living organisms themselves but their units of inheritance, units which either benefitted or handicapped their offspring under particular conditions of nature.

The era of genetics had begun. The tough-minded English biologist William Bateson became transfixed on reading a later paper of de Vries, citing Mendel, and henceforth became ‘Mendel’s bulldog’. In 1905 he coined the word ‘genetics’ for the study of heredity and variation, and successfully promoted that study at his home base, Cambridge. And just as Darwin’s idea of random variation sparked a search for the source of that variation, the idea of genetics and those particles of information known as ‘genes’ led to a worldwide explosion of research and inquiry into the nature of genes and how they worked – chromosomes, haploid and diploid cells, DNA, RNA, gene expression, genomics, the whole damn thing. We now see natural selection operating everywhere we’re prepared to look, as well as the principles of ‘artificial’ or human selection, in almost all the food we eat, the pets we fondle, and the superbugs we try so desperately to contain or eradicate. But of course there’s so much more to learn….

William Bateson

Written by stewart henderson

June 14, 2017 at 5:42 pm

an intro to chemistry for dummies by dummies


orbitals – one day we may understand

Jacinta: Well, in ‘researching’ – I have to put it in quotes cause what I do is so shallow it barely counts as research – the last piece, I came across a reference to Philip Ball’s choice of the top ten unsolved mysteries in science, at least chemical science.

Canto: Philip Ball, author of Curiosity…

Jacinta: Among other things. His list was published in Scientific American in 2011, the official ‘Year of Chemistry’ – which passed unnoticed by supposedly scientific moi. The actual article is largely unavailable to the impoverished, but at least I’ve been able to access the list here. So I thought we might have fun discussing it in our quest to self-educate autant que possible before we die.

Canto: Yes I don’t know enough about chemistry to say whether this is a bog-standard list or an eccentric one, but there are no quibbles about the first mystery – the origin of life. But have we already covered that?

Jacinta: Not really. Ball’s mystery number 1, to be exact, is ‘How did life begin?’ – by which he presumably means life as we know it. And, as Jack Szostak puts it, the answer lies with ‘chemistry plus details’. Putting the right chemistry together in the right order under the right conditions, which they’ve managed to do in a ‘small way’ in the lab, synthesising a pyrimidine nucleotide, as noted in our last post.

Canto: Yes it seems to me we’re never going to solve this mystery by somehow stumbling upon the first life on Earth, or even a trace of it. How will we ever know it’s the first? Then again creating different kinds of conditions – gases and pressures and molecular bits and pieces – and mixing and shaking and cooking, that may not solve the mystery either, because we’ll never know if it happened like that, but it might show how life can begin, and that would be pretty awesome, if I may use that word correctly for once.

Jacinta: Usage changes mate, live with it. So what’s Ball’s second mystery?

Canto: ‘How do molecules form?’ Now we’re really getting into basic chemistry.

Jacinta: But isn’t that a known known? Bonding isn’t it? Like O2 is an oxygen atom bonding with another to create a more stable configuration… I don’t know.

Canto: Well let’s look into it. What exactly is a chemical bond and why do they form? Molecular oxygen is common and stable, but what about ozone, isn’t that just oxygen in a different molecular form, O3? Yet in different molecular form, oxygen has different qualities. Ozone’s a pungent-smelling gas, whereas standard oxygen’s odourless. So why does it have different molecular forms? Why does it have any molecular form, why doesn’t it just exist as single atoms?

Jacinta: But then you could ask why do atoms exist, and why in different configurations of protons and neutrons, etc? Best to stick to how questions.

Canto: Okay, I’d like to know how, under what conditions, oxygen exists as O3 rather than O2.

Jacinta: So we have to go to bonding. This occurs between electrons in the ‘outer shell’ of atoms. In molecular oxygen, O2, the two oxygen atoms form a covalent bond, sharing four electrons, two from each atom. The water and carbon dioxide molecules are also covalently bonded. Covalently bonded molecules are usually in liquid or gas form.

Canto: What causes the atoms to form these bonds though?

Jacinta: There are two other types of bonds, ionic and metallic. As to causes, there are simple and increasingly complex explanations. I’m sure Ball was after the most complex and comprehensive explanation possible, which I believe involves quantum mechanics. For a very introductory explanation to the types of bonds, this website is useful, but this much more complex, albeit brief, explanation of the O2 bond in particular will leave you scratching your head. So I think we should do a sort of explication de texte of this response, which comes from organic chemist David Shobe:

If you mean the molecule O2, that is actually a complicated question.  It is a double bond, but not a typical double bond such as in ethylene, CH2=CH2.  In ethylene, each carbon atom has a sigma orbital and a pi orbital for bonding, and there are 4 electrons available (after forming the C-H bonds), so each bonding orbital (sigma and pi) has 2 electrons, which is optimal for bonding.  Also, since each orbital has a pair of electrons, one gets a singlet ground state: all electrons are in pairs.

In O2, there are 1 sigma orbital and 2 pi orbitals for bonding, but 12 valence electrons.  Four electrons, 2 on each oxygen atom, are in lone pairs, away from the bonding area.  This leaves 8 electrons for 3 bonding orbitals.  Since each orbital can only hold 2 electrons, there are 2 electrons forced into antibonding orbitals.  This is just what it sounds like: these electrons count negatively in determining the type of bond (technical term is bond order), so 2 sigma bonding electrons + 4 pi bonding electrons – 2 pi antibonding electrons, divided by 2 since an orbital holds 2 electrons, equals a bond order of 2: a double bond.

However, there are *two* pi antibonding orbitals with the same energy.  As  a result, one electron goes into each pi antibonding orbital.  This results in a triplet ground state: one in which there are two unpaired electrons.

That may be more answer than you wanted, but it’s what chemists believe.
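Shobe’s bond-order arithmetic can be checked with a few lines of Python (a sketch only; the electron counts are those given in the quote, and the function name is mine):

```python
# Bond order is (bonding electrons - antibonding electrons) / 2,
# as in the quoted explanation above.

def bond_order(bonding_electrons, antibonding_electrons):
    return (bonding_electrons - antibonding_electrons) / 2

# O2: 2 sigma bonding + 4 pi bonding electrons, 2 pi antibonding electrons
print(bond_order(2 + 4, 2))  # 2.0, a double bond

# Ethylene's C=C for comparison: 4 bonding electrons, none antibonding
print(bond_order(4, 0))      # 2.0, also a double bond
```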

Canto: Wow, a tough but interesting task. So a very good place to start is the beginning. By double bond, does he mean covalent bond?

Jacinta: Well according to this clearly reliable site, ethylene, aka ethene (C2H4) is the simplest alkene, that is an unsaturated (??)  hydrocarbon with double bonds – covalent bonds – between the carbons. So I think the answer to your question is yes… or no, there are triple covalent bonds too.

Canto: Okay so I’d like to know more about what a covalent bond is, and what valence electrons are, and then we need to know more about orbitals – pi and sigma and maybe others.

Jacinta: Well guess what, the more you dive into molecular bonding, the murkier stuff gets – until you familiarise yourself I suppose. There are different types of orbitals which lead to different types of covalent bonds, single, double and triple. The term ‘covalent’ means joint ownership, sharing, partnering, as we know, of valence. So how to describe valence? With great difficulty.

Canto: Just watched a video that tells me that covalent compounds or molecular compounds only exist between non-metallic elements, whereas ionic compounds are made up of non-metallic and metallic elements, and ionic bonds are quite different from covalent bonds. And presumably metallic bonds join only metallic elements. Don’t know if that helps any.

Jacinta: Well yes it does in that it tells us we really need to start from scratch with basic chemistry before we can get a handle on the molecule problem.

Canto: Okay, time to go back to the Khan academy.

Jacinta: Yes and we’ll do so always bearing in mind that fundamental question about the formation of molecules. So our chemistry lesson begins with elements made up of atoms so tiny that, for example, the width of a human hair, which is essentially carbon, can fit a million of them.

Canto: And the elements are distinguished from each other by their atomic numbers, which is the number of protons in their nuclei. They can have different numbers of neutrons, but for example, carbon must always have six protons.

Jacinta: And neutral-charge carbon will have six electrons buzzing about the nucleus, sort of. They keep close to the nucleus because they’re negatively charged – we don’t know why, or at least I don’t – and so they’re attracted to the positively charged protons in the nucleus.

Canto: More fundamental questions. Why are electrons negatively charged? Why are positively charged particles attracted to negatively charged ones? And if they’re so attracted why don’t electrons just fall into the nucleus and kiss their attractive protons, and live in wedded bliss with them?

Jacinta: Let’s stick to how questions for now. Electrons don’t fall into the nucleus but they can be lost to other atoms, in which case the atom will have a positive charge, having more protons than electrons. So with the losing and the stealing and the sharing of electrons between atoms, elements will have changed properties. Remember oxygen and ozone.

Canto: So it’s interesting that, right from the get-go, we’re looking at that ancient philosophical question of the constituents of matter. And though we now know that atoms aren’t indivisible, they do represent the smallest constituents of any particular element.

Jacinta: But as you know, that smallest constituent gets weird and mathematical and quantum mechanical, with electrons being waves or particles or probability distributions, with the probability of finding them or ‘fixing’ them being higher the closer you get to the nucleus. So this mathematical probability function of an electron is what we call its orbital. Remember that word?

Canto: Right, that’s a beginning, and it gives me an inkling into types of orbitals, such as antibonding orbitals. Continue.

Jacinta: We’ll continue next time. We’ve only just entered the darkness before the dawn.

 

http://solarfuel.clas.asu.edu/10-unsolved-mysteries-chemistry

https://www.factmonster.com/dk/encyclopedia/science/molecules

https://www.quora.com/What-type-of-bond-do-2-oxygen-atoms-have

https://chem.libretexts.org/Core/Organic_Chemistry/Alkenes/Properties_of_Alkenes/Structure_and_Bonding_in_Ethene-The_Pi_Bond

https://en.wikipedia.org/wiki/Unsaturated_hydrocarbon

Written by stewart henderson

May 23, 2017 at 1:27 am

the strange world of the self-described ‘open-minded’ part two


  • That such a huge number of people could seriously believe that the Moon landings were faked by a NASA conspiracy raises interesting questions – maybe more about how people think than anything about the Moon landings themselves. But still, the most obvious question is the matter of evidence. 

Philip Plait,  from ‘Appalled at Apollo’, Chapter 17 of Bad Astronomy

the shadows of astronauts Dave Scott and Jim Irwin on the Moon during the 1971 Apollo 15 mission – with thanks to NASA, which recently made thousands of Apollo photos available to the public through Flickr

So as I wrote in part one of this article, I remember well the day of the first Moon landing. I had just turned 13, and our school, presumably along with most others, was given a half-day off to watch it. At the time I was even more amazed that I was watching the event as it happened on TV, so I’m going to start this post by exploring how this was achieved, though I’m not sure that this was part of the conspiracy theorists’ ‘issues’ about the missions. There’s a good explanation of the 1969 telecast here, but I’ll try to put it in my own words, to get my own head around it.

I also remember being confused at the time, as I watched Armstrong making his painfully slow descent down the small ladder from the lunar module, that he was being recorded doing so, sort of side-on (don’t trust my memory!), as if someone was already there on the Moon’s surface waiting for him. I knew of course that Aldrin was accompanying him, but if Aldrin had descended first, why all this drama about ‘one small step…’? – it seemed a bit anti-climactic. What I didn’t know was that the whole thing had been painstakingly planned, and that the camera recording Armstrong was lowered mechanically, operated by Armstrong himself. Wade Schmaltz gives the low-down on Quora:

The TV camera recording Neil’s first small step was mounted in the LEM [Lunar Excursion Module, aka Lunar Module]. Neil released it from its cocoon by pulling a cable to open a trap door prior to exiting the LEM that first time down the ladder.

Neil Armstrong, touching down on the Moon – an image I’ll never forget

 

the camera used to capture Neil Armstrong’s descent

As for the telecast, Australia played a large role. Here my information comes from Space Exploration Stack Exchange, a Q and A site for specialists as well as amateur space flight enthusiasts.

Australia was one of three continents involved in the transmissions, but it was the most essential. Australia had two tracking stations, one near Canberra and the other at the Parkes Radio Observatory west of Sydney. The others were in the Mojave Desert, California, and in Madrid, Spain. The tracking stations in Australia had a direct line on Apollo’s signal. My source quotes directly from NASA:

The 200-foot-diameter radio dish at the Parkes facility managed to withstand freak 70 mph gusts of wind and successfully captured the footage, which was converted and relayed to Houston.

Needless to say, the depictions of Canberra and Sydney aren’t geographically accurate here!

And it really was pretty much ‘as it happened’, the delay being less than a minute. The Moon is only about a light-second away, but there were other small delays in relaying the signal to TV networks for us all to see.

So now to the missions and the hoax conspiracy. But really, I won’t be dealing with the hoax stuff directly, because frankly it’s boring. I want to write about the good stuff. Most of the following comes from the ever-more reliable Wikipedia – available to all!

The ‘space race’ between the Soviet Union and the USA can be dated quite precisely. It began in July 1956, when the USA announced plans to launch a satellite – a craft that would orbit the Earth. Two days later, the Soviet Union announced identical plans, and was able to carry them out a little over a year later. The world was stunned when Sputnik 1 was launched on October 4 1957. Only a month later, Laika the Muscovite street-dog was sent into orbit in Sputnik 2 – a certain-death mission. The USA got its first satellite, Explorer 1, into orbit at the end of January 1958, and later that year the National Aeronautics and Space Administration (NASA) was established under Eisenhower to encourage peaceful civilian developments in space science and technology. However the Soviet Union retained the initiative, launching its Luna program in late 1958, with the specific purpose of studying the Moon. The whole program, which lasted until 1976, cost some $4.5 billion and its many failures were, unsurprisingly, shrouded in secrecy. The first three Luna rockets, intended to land, or crash, on the Moon’s surface, failed on launch, and the fourth, later known as Luna 1, was given the wrong trajectory and sailed past the Moon, becoming the first human-made object to take up an independent heliocentric orbit. That was in early January 1959 – so the space race, with its focus on the Moon, began much earlier than many people realise, and though so much of it was about macho one-upmanship, important technological developments resulted, and vital observations were made, including measurements of energetic particles in the outer Van Allen belt. Luna 1 was the first spacecraft to achieve escape velocity, the principal barrier to landing a vessel on the Moon.
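As an aside, the escape velocity mentioned above follows from the standard formula v = √(2GM/r). Here’s a quick Python sketch using textbook values for Earth (the constants are mine, not from the post):

```python
import math

# Escape velocity from the standard formula v = sqrt(2GM/r).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of Earth, kg
R_EARTH = 6.371e6    # mean radius of Earth, m

def escape_velocity(mass, radius):
    return math.sqrt(2 * G * mass / radius)

print(escape_velocity(M_EARTH, R_EARTH))  # roughly 11,200 m/s, i.e. ~11.2 km/s
```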

After another launch failure in June 1959, the Soviets successfully launched the rocket later known as Luna 2 in September that year. Its crash landing on the Moon was a great success, which the ‘communist’ leader Khrushchev was quick to ‘capitalise’ on during his only visit to the USA immediately after the mission. He handed Eisenhower replicas of the pennants left on the Moon by Luna 2. And there’s no doubt this was an important event, the first planned impact of a human-built craft on an extra-terrestrial object, almost 10 years before the Apollo 11 landing.

The Luna 2 success was immediately followed only a month later by the tiny probe Luna 3‘s flyby of the far side of the Moon, which provided the first-ever pictures of its more mountainous terrain. However, these two missions formed the apex of the Luna enterprise, which experienced a number of years of failure until the mid-sixties. International espionage perhaps? I note that James Bond began his activities around this time.

the Luna 3 space probe (or is it H G Wells' time machine?)

the Luna 3 space probe (or is it H G Wells’ time machine?)

The Luna program wasn’t the only one being financed by the Soviets at the time, and the Americans were also developing programs. Six months after Laika’s flight, the Soviets successfully launched Sputnik 3, the fourth successful satellite after Sputnik 1 & 2 and Explorer 1. The important point to be made here is that the space race, with all its ingenious technical developments, began years before the famous Vostok 1 flight that carried a human being, Yuri Gagarin, into space for the first time, so the idea that the technology wasn’t sufficiently advanced for a moon landing many years later becomes increasingly doubtful.

Of course the successful Vostok flight in April 1961 was another public relations coup for the Soviets, and it doubtless prompted Kennedy’s speech to the US Congress a month later, in which he proposed that “this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the Moon and returning him safely to the Earth.”

So from here on in I’ll focus solely on the USA’s moon exploration program. It really began with the Ranger missions, which were conceived (well before Kennedy’s speech and Gagarin’s flight) in three phases or ‘blocks’, each with different objectives and with increasingly sophisticated system design. However, as with the Luna missions, these met with many failures and setbacks. Ranger 1 and Ranger 2 failed on launch in the second half of 1961, and Ranger 3, the first ‘block 2 rocket’, launched in late January 1962, missed the Moon due to various malfunctions, and became the second human craft to take up a heliocentric orbit. The plan had been to ‘rough-land’ on the Moon, emulating Luna 2 but with a more sophisticated system of retrorockets to cushion the landing somewhat. The Wikipedia article on this and other missions provides far more detail than I can provide here, but the intensive development of new flight design features, as well as the use of solar cell technology, advanced telemetry and communications systems and the like really makes clear to me that both competitors in the space race were well on their way to having the right stuff for a manned moon landing.

I haven’t even started on the Apollo missions, and I try to keep my posts to 1,500 words or so, so I’ll have to write a part 3! How exciting!

The Ranger 4 spacecraft was more or less identical in design to Ranger 3, with the same impact-limiter – made of balsa wood! – atop the lunar capsule. Ranger 4 passed its preliminary testing with flying colours, the first of the Rangers to do so. However the mission itself was a disaster: the on-board computer failed, no useful data was returned, and none of the preprogrammed actions, such as solar panel deployment and high-gain antenna utilisation, took place. Ranger 4 finally impacted the far side of the Moon on 26 April 1962, becoming the first US craft to reach another celestial body. Ranger 5 was launched in October 1962, at a time when NASA was under pressure due to the many failures and technical problems, not only with the Ranger missions but with the Mariner missions, Mariner 1 (designed for a flyby mission to Venus) having been a conspicuous disaster. Unfortunately Ranger 5 didn’t improve matters: after a series of on-board and on-ground malfunctions, the craft missed the Moon by a mere 700 kilometres. Ranger 6, launched well over a year later, was another conspicuous failure, as its sole mission was to send back high-quality photos of the Moon’s surface before impact. Impact occurred, and overall the flight was the smoothest one yet, but the camera system failed completely.

There were three more Ranger missions. Ranger 7, launched in July 1964, was the first completely successful mission of the series. Its mission was the same as that of Ranger 6, but this time over 4,300 photos were transmitted during the final 17 minutes of flight. These photos were subjected to much scrutiny and discussion in terms of the feasibility of a soft landing, and the general consensus was that some areas looked suitable, though the actual hardness of the surface couldn’t be determined for sure. Miraculously enough, Ranger 8, launched in February 1965, was also completely successful. Again its sole mission was to photograph the Moon’s surface, as NASA was beginning to ready itself for the Apollo missions. Over 7,000 good quality photos were transmitted in the final 23 minutes of flight. The overall performance of the spacecraft was hailed as ‘excellent’, and its impact crater was photographed two years later by Lunar Orbiter 4. And finally Ranger 9 made it three successes in a row, and this time nearly 6,000 images from its cameras were broadcast live to viewers across the United States. The date was 24 March 1965. The next step would be that giant one.

A Ranger 9 image showing rilles – long narrow depressions – on the Moon’s surface