Monthly Archives: July 2012

Understanding ‘Deep Time’

One of the most challenging things I’ve tried to explain in my science communication career is the concept of ‘deep time’, or the geological timeline of the Earth. While it is crucial to understanding the history of the Earth and of life as we know it, the vast span of time involved can be an incredibly difficult concept for young people to grasp – and rightly so. When an 8-week summer term can feel like an idyllic lifetime, 4-billion-plus years is mind-blowingly huge. However, a strong conceptualization of scale and the relationships between scales is essential to being a scientifically literate consumer of information (Tretter et al., 2006), so how do you explain geological time in a way that young people can grasp?

An important engagement tool is helping people to relate to the facts you’re telling them, making them invested in learning more rather than counting on them to want to gain knowledge just for knowledge’s sake. Many educators use the hook of human existence when explaining geological time, which not only interests the audience but gives an excellent reference to the vastness of scale with which you are dealing.

You could try to calculate it out in numbers: the history of life on Earth began about 3.8 billion years ago, initially with single-celled prokaryotic cells such as bacteria. Multicellular life evolved over a billion years later, and it’s only in the last 570 million years that the kinds of life forms we are familiar with began to evolve, starting with arthropods, followed by fish 530 million years ago (Ma), land plants 475 Ma and forests 385 Ma. Mammals didn’t evolve until 200 Ma, and our own species, Homo sapiens, appeared only 200,000 years ago. So humans have been around for a mere 0.004% of the Earth’s history.
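The arithmetic behind that closing figure is easy to reproduce; here is a minimal sketch using the approximate ages quoted above (all figures in years):

```python
# Approximate ages quoted in the text, in years before the present.
EARTH_AGE = 4.6e9
events = {
    "first life (prokaryotes)": 3.8e9,
    "first fish": 530e6,
    "land plants": 475e6,
    "first mammals": 200e6,
    "Homo sapiens": 200e3,
}

# What fraction of Earth's history does each span represent?
for name, age in events.items():
    share = age / EARTH_AGE * 100
    print(f"{name}: {share:.4f}% of Earth's history")
```

Running it shows Homo sapiens at roughly 0.004% of the Earth’s history, matching the figure in the text.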

Many people turn to analogies, those things which are like… other useful things. For example:

The 24-hour clock: Did you know… that if we compress the Earth’s 4.6-billion-year existence to a 24-hour time scale, the first human species appeared about 47–94 seconds before midnight, and our species (Homo sapiens) appeared roughly 2 seconds before midnight?
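As a sanity check, those seconds-to-midnight figures work out if the compressed span is the Earth’s full 4.6-billion-year age, with the first human species placed at roughly 2.5–5 million years ago (both illustrative round figures):

```python
# Compress Earth's ~4.6-billion-year history into a single 24-hour day.
EARTH_AGE_YEARS = 4.6e9
SECONDS_PER_DAY = 24 * 60 * 60

def seconds_before_midnight(years_ago):
    """How many 'clock seconds' before midnight an event of a given age falls."""
    return years_ago / EARTH_AGE_YEARS * SECONDS_PER_DAY

# First human species, ~2.5-5 million years ago: about 47-94 seconds.
print(seconds_before_midnight(2.5e6), seconds_before_midnight(5e6))
# Homo sapiens: only the last few seconds of the day.
print(seconds_before_midnight(200_000))
```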

Clocks are actually a very common visualisation tool:

The book of the world: If you were to write a history of the Earth’s past, allowing just one page per year, your book would be 4,600,000,000 pages long. That’s a very thick book – 145 miles thick, to be exact. An average reader, reading about one page every two minutes, would need more than 17,503 years to finish it. And that’s with no time out for anything else – no time to eat, sleep, ride a bike, or go to school. Even if you were an amazing speed reader and could read two pages every second, it would still take you nearly 73 years to read the entire book.
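Those figures check out with a few lines of arithmetic; a quick sketch (the ~0.05 mm page thickness is an assumed typical value, not from the original analogy):

```python
# One page per year of Earth's history.
pages = 4_600_000_000

# Thickness: at an assumed ~0.05 mm per page, the stack is ~230 km,
# i.e. roughly 143 miles - close to the quoted 145.
thickness_miles = pages * 0.05e-3 / 1000 / 1.609

minutes_per_year = 365 * 24 * 60
slow_reader_years = pages * 2 / minutes_per_year      # 1 page per 2 minutes
fast_reader_years = pages / 2 / (365 * 24 * 3600)     # 2 pages per second

print(round(thickness_miles))      # ~143 miles
print(int(slow_reader_years))      # 17503 years
print(round(fast_reader_years))    # ~73 years
```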

You can even get creative with a roll of toilet paper:

One of my favourite ways to explain just how long the earth’s timeline stretches is to hold out my arm and ask my audience to imagine the entirety of the earth’s history is represented along its length. It’s billions upon billions of years old, and things changed verrrrry slowly. The first life – bacteria – started forming here, around my bicep, and oxygen didn’t even start building up in the atmosphere until around here, my elbow. After that things start developing a bit more quickly, with blue-green algae appearing here, eukaryotic cells here, and down by my wrist the first multicellular organisms finally enter the story. In just the short span of my hand most of the life we know evolves and flourishes, and eventually – at the very end – human beings appear. How long have they been around, in the grand scheme of things? No more than the sliver of my fingernail at the end of my middle finger. So while we like to think that humans are the dominant species, superior to all others, in the grand scheme of things we’re newcomers on this planet – ones who could be erased with the flick of a nail file!
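For anyone who wants to put numbers on the arm analogy, here is a sketch; the 70 cm shoulder-to-fingertip length and the event ages are illustrative round figures, not measurements:

```python
ARM_CM = 70.0       # shoulder to fingertip, an illustrative figure
EARTH_AGE = 4.6e9   # years

def cm_from_shoulder(years_ago):
    """Where an event of a given age lands along the scaled arm."""
    return (1 - years_ago / EARTH_AGE) * ARM_CM

print(round(cm_from_shoulder(3.8e9), 1))   # first life: ~12.2 cm, up near the bicep
print(round(cm_from_shoulder(600e6), 1))   # Ediacaran-era life: ~60.9 cm along
# Humans, ~200,000 years ago, occupy only the last fraction of a millimetre:
print(ARM_CM - cm_from_shoulder(200e3))    # ~0.003 cm from the fingertip
```

On this scale all of human history really does vanish with one stroke of a nail file.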

And just in case you were wondering how things stacked up in a galaxy far, far away:

Electronics: The Bigger Picture

In our exploration of electronics, we started at the atomic level with the fundamental properties of subatomic particles. We looked at emergent properties of collections of atoms, like the origins of chemical bonding and electronic behavior of materials. Recently we have started to move up in scale, seeing that individual circuit components affect the flow and storage of electrons in different ways. At this point I think it is worthwhile to take a step back and look at the larger picture. While individual electrons are governed by local interactions that minimize energy, we can figure out global rules for a circuit component that tell us how collections of electrons are affected by a resistor or some other building block, creating the macroscopic quantity we call current. From there we can create collections of circuit components that perform various operations on the current passing through them. These operations can again be combined, and where we may have started with a simple switch, we can end up with a computer or a display or a control circuit.

One way to picture it is like a complex canal system for water: we have a resource whose movement we want to manipulate, to extract physical work and perhaps perform calculations. At a small scale, we can inject dye into a bit of water and watch its progress through the system as it responds to local forces. But we can look at water currents at a larger scale by adding up the behavior of many small amounts of water. In fact, scale is a type of context, a lens through which a system can look quite different! Electrical engineers who design complex circuits for a living tend to work at a much higher level of abstraction than do scientists working on experimental electronic devices. The electrical engineers have to be able to imagine and simulate the function of impressive numbers of transistors, resistors, and other components, as shown below, whereas a device physicist focuses on the detailed physics of a single circuit component, to learn what its best use might be. They are each working with the same system, but in different and complementary ways.

When I first started writing here, I talked about science as a lens through which we can view the world: a set of perspectives that let us see the things around us in a different way than we are used to. But there are lots of different worldviews and perspectives within science, depending on scale as well as other contexts. A discussion of electrical current, for example, could be handled quite differently depending on whether electrons are moving through a polar solvent like water, or synapses in the brain, or a metal wire connecting a capacitor to an inductor. Scientists who have trained in different fields like physics, chemistry, or biology can imagine very different contexts for discussions of the same phenomenon, so that even when the fundamental science is the same, the narrative and implications may change between contexts.

But in the end, whether you are a scientist or just interested in science, it helps to know not only that an electron is a tiny charged particle, but also how it behaves in electronic circuits, in chemical bonds between atoms, and in biological systems. And to know that it’s possible to build computers out of gears, billiard balls, or even crabs! But the size and properties of electronic computers have led them to dominate, at least for now.


How Differential Gear Works

An explanation of how differential gear works in a straightforward and clear video. Skip to the end – does what they’re talking about make sense? Now watch from the beginning (start from 1:50 if you want to skip the intro) and see them build up from basic principles until they’ve reached the same point. Does it make sense now?

Contrast this with a written explanation:

A differential is a device, usually but not necessarily employing gears, which is connected to the outside world by three shafts, through which it transmits torque and rotation. The gears or other components make the three shafts rotate in such a way that a = pb + qc, where a, b, and c are the angular velocities of the three shafts, and p and q are constants. Often, but not always, p and q are equal, so a is proportional to the sum (or average) of b and c. Except in some special-purpose differentials, there are no other limitations on the rotational speeds of the shafts. Any of the shafts can be used to input rotation, and the other(s) to output it.

In automobiles and other wheeled vehicles, a differential allows the driving roadwheels to rotate at different speeds. This is necessary when the vehicle turns, making the wheel that is travelling around the outside of the turning curve roll farther and faster than the other. The engine is connected to the shaft rotating at angular velocity a. The driving wheels are connected to the other two shafts, and p and q are equal. If the engine is running at a constant speed, the rotational speed of each driving wheel can vary, but the sum (or average) of the two wheels’ speeds cannot change. An increase in the speed of one wheel must be balanced by an equal decrease in the speed of the other.
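The two paragraphs above condense into a tiny numerical sketch, taking the common automotive case p = q = 1/2, so the input shaft turns at the average of the two wheel speeds:

```python
# The differential relation a = p*b + q*c, with p = q = 1/2.
p = q = 0.5

def carrier_speed(b, c):
    """Angular velocity of the input (engine-side) shaft, in rpm,
    given the speeds of the two driving wheels."""
    return p * b + q * c

# Straight-line driving: both wheels at 100 rpm, input at 100 rpm.
assert carrier_speed(100, 100) == 100

# In a turn the outer wheel speeds up and the inner wheel slows by the
# same amount, so the engine-side speed is unchanged.
assert carrier_speed(120, 80) == 100
```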

(taken from Wikipedia)

While this is almost certainly an accurate description of the forces behind the device, it’s nowhere near as easily understood as a visual representation. It’s important to understand when visuals can help your explanation, and if you do use them, it’s extremely important to make sure they are clear and accessible.

And lest you think these are newfangled concepts – this video is from the 1930s!

The Many Roads from P-N Junctions to Transistors

When I called p-n junctions the building blocks of digital electronics, I was referring to their key role in building transistors. A transistor is another circuit element, but it is active, meaning it can add energy to a circuit, unlike passive elements such as resistors, capacitors, and inductors, which can only store or dissipate energy. The transistor has an input where current enters the device and an output where current leaves, but also a control electrode which can be used to modify the transistor’s function. Transistors can act as switches or as amplifiers, changing the gain of a circuit (i.e. how much current comes out compared to how much went in). So where did the transistor come from, and how do you build one?

The earliest devices which acted as transistors were called ‘triodes’, for their three electrodes, and were made using vacuum tubes. A current can be transmitted from one electrode to another across the vacuum inside the tube, but applying a voltage to the third electrode induces an electric field which diverts the current, meaning that the third electrode can be used as a switch to turn current on and off. Triodes were in wide use for the first half of the twentieth century, enabling many radio and telephone innovations, and in fact are still used in some specialty applications that require very high voltages. But they are quite fragile and consume a lot of power, which is part of what pushed researchers to find alternative ways to build a transistor.

Recall that the p-n junction acts as a diode, passing current in one direction but not the other. Two p-n junctions back to back, which could be n-p-n or p-n-p, will pass no current in either direction, because one of the junctions will always block the flow of charge. However, applying a voltage to the point where the p-n junctions are connected modifies the electric field, allowing current to pass. This kind of device is called a bipolar junction transistor (or BJT), because the p-n junction diodes respond differently to positive voltage than to negative voltage, which means they are sensitive to the polarity of the current. (Remember all those times in Star Trek that they tried reversing the polarity? Maybe they had some diodes in backward!) The input of a bipolar junction transistor is called the collector, the output is called the emitter, and the region where voltage is applied to switch the device on is called the base. These are drawn as C, E, and B in the schematic shown below.

Bipolar Junction Transistor

Looking at the geometry of a bipolar junction transistor, you might notice that without the base region, the device is just a block of doped semiconductor which would be able to conduct current. What if there were a way to insert or remove a differently doped region to create junctions as needed? This can be done with a slightly different geometry, as shown below with the input now marked S for source, the output marked D for drain, and the control electrode marked G for gate. Applying a voltage to the gate electrode widens the depletion region at the p-n interface, which pinches off current by reducing the cross-section of p-type semiconductor available for conduction. This is effectively a p-n-p junction where the interfaces can be moved by adjusting the depletion region. Since it’s the electric field due to the gate that makes the channel wider or narrower, this device is called a junction field-effect transistor, or JFET.

Junction Field Effect Transistor

Both types of junction transistor were in widespread use in electronics from about 1945 to 1975. But another kind of transistor has since leapt to prominence. Inverting the logic that led us to the junction field-effect transistor, we can imagine a device geometry where an electric field applied by a gate actually creates the conducting region in a semiconductor, as in the schematic below. This device is called a metal-oxide-semiconductor field-effect transistor (or MOSFET), because the metal gate electrode is separated from the semiconductor channel by a thin oxide layer. Using the oxide as an insulator is pretty clever, because interfaces between silicon and its native oxide have very few places for electrons to get stuck, compared to the interfaces between silicon and other insulating materials. This means that the whole device, with oxide, p-type silicon, and n-type silicon, can be made in a silicon fabrication facility, many of which had already been built in the first few decades of the electronics era.

These two advantages over junction transistors gave MOSFETs a definite edge, but one final development has cemented their dominance. The combination of an n-channel MOSFET and a p-channel MOSFET together enable the creation of an extremely useful set of basic circuits. Devices built using pairs of one n-channel and one p-channel MOSFET working together are called CMOS, as shorthand for complementary metal-oxide-semiconductor, and have both lower power consumption and increased noise tolerance when compared to junction transistors. You might be asking, what are these super important circuits that CMOS is the best way of building? They are the circuits for digital logic, which we will devote a post to shortly!
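To make ‘complementary’ concrete, here is a minimal sketch of the most basic CMOS circuit, an inverter, with each MOSFET idealized as a voltage-controlled switch (a logic-level toy model, not device physics):

```python
def cmos_inverter(vin):
    """Idealized CMOS inverter: vin is a logic level, 0 or 1.
    The p-channel device conducts when its gate is low, connecting the
    output to the supply (logic 1); the n-channel device conducts when
    its gate is high, connecting the output to ground (logic 0)."""
    pmos_on = (vin == 0)
    nmos_on = (vin == 1)
    # Exactly one device conducts, so there is never a direct path
    # from supply to ground in either steady state.
    assert pmos_on != nmos_on
    return 1 if pmos_on else 0

assert cmos_inverter(0) == 1
assert cmos_inverter(1) == 0
```

That “exactly one conducts” property is the origin of CMOS’s low static power consumption: current only flows briefly while the output is switching.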



“If you take a look back through history, science and maths evolved to solve mysteries”.

What a great way to highlight the impacts science, technology, engineering and maths have had on the world.

Particles, Field Theory, and the Higgs Boson

The buzz around the discovery of the Higgs boson last week induced Erin to challenge me to explain what it is! Well, I’m not a particle physicist, but I do like talking about science, so here goes.

It’s my opinion that the easiest way to understand the Higgs boson is by starting from forces and fields. In day-to-day life, there are two broad sorts of forces that we are used to encountering. There are forces that come from expending energy to create mechanical action, like pushing a door or throwing a ball. But there are also forces that arise due to intrinsic fields, such as the gravitational force on a falling apple or the magnetic force on a compass. These fields provide a way to quantify the fact that at every point in space, there are gravitational and electromagnetic and other forces coming from other near or not-so-near objects. If a new object is introduced to a point in space, it feels forces due to those fields. One way to think about it is that the fields transmit forces between objects, like the gravitational field which transmits forces between the Earth and the Sun. Thus, there is no true vacuum, in part because there is a measurable gravitational field everywhere. Quantum field theory takes things a step further and describes everything in terms of fields. Then, what we have been calling ‘particles’ are special mathematical solutions to the field equations, like oscillations of the underlying field.

But when thinking about fields providing force, a very good question to ask is: what’s the physical mechanism for that? When I push on a door, I generate movement by activating muscles, which turns one chemical into another, turning energy that was stored in chemical bonds into a mechanical form of energy. My hand transfers that energy to the door, via the interface between door and hand at the spot where I’m pushing, and then the door moves. So if there is really a magnetic field pushing a compass needle, how is energy transferred to the needle in order for it to move?

The current thinking in particle physics is that each of these fields has a corresponding particle that transfers the forces from that field. So for an electromagnetic interaction between two particles, there is actually a third particle type that is being exchanged, which is what conveys energy from the field. Usually these mediating particles are ‘virtual’, meaning they exist over very short distances but can have high energies (the requirement of short lifetime for high energy comes from the uncertainty principle between energy and time). In the Standard Model of particle physics, these mediators are called gauge bosons. For example, the electromagnetic field is mediated by the photon, which you may know as the quantum of light. There are special forces that are only noticeable on very short length scales, such as those in the nucleus of atoms. These are the nuclear strong and weak forces, mediated by gluons and W and Z bosons. An additional gauge boson for the force of gravity, called the graviton, has been theorized but not yet detected.
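The parenthetical uncertainty-principle argument can be made quantitative: a virtual mediator of rest energy mc² can only exist for a time of order ħ/(mc²), so it travels at most a distance of order ħc/(mc²). A rough back-of-envelope estimate, using ħc ≈ 197.3 MeV·fm and a W boson rest energy of roughly 80,400 MeV:

```python
# Rough range of a force carried by a massive virtual mediator:
# range ~ hbar*c / (m*c^2).
HBAR_C = 197.3  # hbar*c in MeV * femtometres

def force_range_fm(rest_energy_mev):
    """Order-of-magnitude force range, in femtometres."""
    return HBAR_C / rest_energy_mev

# W boson (~80,400 MeV): range ~0.0025 fm, around a thousandth of a
# proton radius - which is why the weak force is so short-ranged.
# A massless mediator like the photon gives an unlimited range.
print(force_range_fm(80_400))
```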

The Standard Model of particle interactions was intended as a framework for unifying the electromagnetic, strong, and weak forces, meaning that it had to account for the properties of gauge bosons. Accounting for the mass of these gauge bosons has been complex: the photon is massless, but the W and Z bosons have a significant mass. But in early formulations of the Standard Model, all particles were treated as massless, and it was a big issue to find a way to fit non-zero particle mass into the picture. The Higgs field is a means to that end, proposed by the theoretical physicist Peter Higgs (and independently by several other theorists) in 1964. It is another field which interacts with particles and contributes to the forces that they experience, this time by giving them mass. The Higgs field is often described as a field which slows some particles down as if they were moving through treacle, but Higgs himself has mentioned his dislike of the idea that the particles experience drag or turbulence due to the Higgs field. The analogy with drag and turbulence implies that energy is being dissipated, but the Higgs field affects particles without reducing their energy. Higgs proposes thinking of it as similar to the refraction of light as it enters a medium like water. As the properties of light are changed by moving through water, so the properties of the particle are changed by interaction with the Higgs field. How particles interact with the Higgs field determines their mass.

If the Higgs field is real, a corresponding particle – the Higgs boson – must exist. And a new boson with many of the expected properties of the Higgs boson has now been found in two corroborating experiments at CERN, a particle physics laboratory in Geneva.

There are many different theoretical ways to add the Higgs field into the Standard Model. But most of them predict a fairly high energy for the Higgs boson, and so as accelerator energies rose progressively over the decades and the Higgs boson was not found, Higgs bosons of low mass were eliminated as possibilities. The Large Hadron Collider, which went online in 2008, was expected to have an energy range capable of either finding a Higgs boson that fit with one version of the Standard Model, or proving that the Standard Model contained a serious error. The newly discovered Higgs boson seems to fit the Standard Model, though more work is needed to figure out which version of the Standard Model was correct.

There are still lots of questions to be answered regarding the Standard Model, though. The Standard Model does not incorporate gravity, as described by general relativity. It doesn’t explain why the range of mass values for the fundamental particles is huge and kind of arbitrary, as opposed to the few fixed values of charge that fundamental particles can have. It also doesn’t explain why there is more matter than anti-matter (a puzzle connected to CP violation), or what dark matter or dark energy (both of which have been observed by astronomers) might be. So the discovery of the Higgs boson is definitely a triumph for the particle physics community, but there are still plenty of discoveries to be made!

How to describe the Higgs boson to a seven-year-old

Just a quick one – following on from announcements from CERN regarding the probable discovery of the Higgs boson particle we’ll be seeing plenty of attempts to explain exactly what it is in layperson’s terms. Maybe our very own Jessamyn will give it a try?