Tag Archives: electrons

The Many Roads from P-N Junctions to Transistors

When I called p-n junctions the building blocks of digital electronics, I was referring to their key role in building transistors. A transistor is another circuit element, but it is active, meaning it can add energy to a circuit, rather than passive like resistors, capacitors, and inductors, which only store or dissipate energy. The transistor has an input where current enters the device and an output where current leaves, but it also has a control electrode which can be used to modify the transistor’s function. A transistor can act as a switch or an amplifier, and it sets the gain of a circuit (i.e. how large the output current is compared to the input current). So where did the transistor come from, and how do you build one?

The earliest devices which acted as transistors were called ‘triodes’, for their three electrodes, and were built with vacuum tubes. A current could be transmitted from one electrode to another across the vacuum inside the tube, but applying a voltage to the third electrode induced an electric field that diverted the current, meaning the third electrode could be used as a switch to turn the current on and off. Triodes were in wide use for the first half of the twentieth century and enabled many radio and telephone innovations; in fact, they are still used in some specialty applications that require very high voltages. But they are quite fragile and consume a lot of power, which is part of what pushed researchers to find alternate ways to build a transistor.

Recall that the p-n junction acts as a diode, passing current in one direction but not the other. Two p-n junctions back to back, in either an n-p-n or a p-n-p arrangement, will pass no current in either direction, because one of the junctions will always block the flow of charge. However, applying a voltage to the middle region where the two junctions meet modifies the electric field, allowing current to pass. This kind of device is called a bipolar junction transistor (or BJT), because both polarities of charge carrier, electrons and holes, take part in carrying the current. (The junctions themselves are also sensitive to polarity: remember all those times in Star Trek that they tried reversing the polarity? Maybe they had some diodes in backward!) The input of a bipolar junction transistor is called the collector, the output is called the emitter, and the region where voltage is applied to switch the device on is called the base. These are drawn as C, E, and B in the schematic shown below.

Bipolar Junction Transistor
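
To get a rough feel for how the three terminals relate, here is a minimal sketch of the transistor in its normal (forward-active) operating mode, where the collector current is roughly proportional to the much smaller base current. The proportionality constant beta (the current gain) below is just an illustrative assumption; real values depend on the particular device.

```python
# Minimal sketch of bipolar junction transistor currents in forward-active mode.
# The current gain beta is an assumed, illustrative value; it varies by device.

def bjt_currents(i_base, beta=100.0):
    """Return (collector, emitter) currents in amps for a given base current."""
    i_collector = beta * i_base          # collector current is roughly beta times the base current
    i_emitter = i_collector + i_base     # the emitter carries the sum of the other two
    return i_collector, i_emitter

# A small base current controls a much larger collector current:
ic, ie = bjt_currents(i_base=10e-6)      # 10 microamps into the base
print(f"Collector: {ic*1e3:.2f} mA, Emitter: {ie*1e3:.3f} mA")
```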

Looking at the geometry of a bipolar junction transistor, you might notice that without the base region, the device is just a block of doped semiconductor which would be able to conduct current. What if there were a way to insert or remove a differently doped region to create junctions as needed? This can be done with a slightly different geometry, as shown below, with the input now marked S for source, the output marked D for drain, and the control electrode marked G for gate. Applying a voltage to the gate electrode widens the depletion region at the p-n interface, which pinches off current by reducing the cross-section of the channel available for conduction. The result is effectively a sandwich of p-n junctions whose interfaces can be moved by adjusting the depletion region. Since it’s the electric field from the gate that makes the channel wider or narrower, this device is called a junction field-effect transistor, or JFET.

Junction Field Effect Transistor
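
A common textbook approximation for how the gate pinches off the channel is the square-law model for the saturated drain current. The numbers below (a maximum current I_DSS and a pinch-off voltage V_P) are illustrative assumptions rather than values for any particular device, and the sketch uses the n-channel sign convention, where a more negative gate voltage squeezes the channel shut.

```python
# Sketch of the textbook square-law model for a JFET in saturation:
#   I_D = I_DSS * (1 - V_GS / V_P)^2   for gate voltages between pinch-off and zero.
# I_DSS (drain current with zero gate voltage) and V_P (pinch-off voltage) are assumed values.

def jfet_drain_current(v_gs, i_dss=10e-3, v_p=-4.0):
    """Saturated drain current (A) of an n-channel JFET for gate-source voltage v_gs (V)."""
    if v_gs <= v_p:
        return 0.0                       # channel fully pinched off
    return i_dss * (1.0 - v_gs / v_p) ** 2

for v in [0.0, -1.0, -2.0, -3.0, -4.0]:
    print(f"V_GS = {v:+.1f} V -> I_D = {jfet_drain_current(v)*1e3:.2f} mA")
```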

Both types of junction transistor were in widespread use in electronics from about 1945 to 1975. But another kind of transistor has since leapt to prominence. Inverting the logic that led us to the junction field-effect transistor, we can imagine a device geometry where an electric field applied by a gate actually creates the conducting region in a semiconductor, as in the schematic below. This device is called a metal-oxide-semiconductor field-effect transistor (or MOSFET), because the metal gate electrode is separated from the semiconductor channel by a thin oxide layer. Using the oxide as an insulator is pretty clever, because interfaces between silicon and its native oxide have very few places for electrons to get stuck, compared to the interfaces between silicon and other insulating materials. This means that the whole device, with oxide, p-type silicon, and n-type silicon, can be made in a silicon fabrication facility, many of which had already been built in the first few decades of the electronics era.
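
One way to put numbers on “the field creates the channel” is the simple long-channel square-law model often taught for MOSFETs: below a threshold voltage there is (ideally) no channel and no current, and above it the saturated drain current grows with the square of the gate overdrive. The threshold voltage and the prefactor k below are illustrative assumptions, not values from any real device.

```python
# Sketch of the long-channel square-law model for an n-channel MOSFET in saturation:
#   I_D = 0                          for V_GS <  V_th  (no inversion channel exists)
#   I_D = 0.5 * k * (V_GS - V_th)^2  for V_GS >= V_th  (channel created by the gate field)
# V_th and k (which lumps mobility, oxide capacitance, and geometry) are assumed values.

def mosfet_drain_current(v_gs, v_th=0.7, k=2e-3):
    overdrive = v_gs - v_th
    return 0.0 if overdrive <= 0 else 0.5 * k * overdrive ** 2

for v in [0.0, 0.5, 1.0, 1.5, 2.0]:
    print(f"V_GS = {v:.1f} V -> I_D = {mosfet_drain_current(v)*1e3:.3f} mA")
```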

These two advantages over junction transistors gave MOSFETs a definite edge, but one final development has cemented their dominance. The combination of an n-channel MOSFET and a p-channel MOSFET enables the creation of an extremely useful set of basic circuits. Devices built using pairs of one n-channel and one p-channel MOSFET working together are called CMOS, shorthand for complementary metal-oxide-semiconductor, and they have both lower power consumption and better noise tolerance than circuits built from junction transistors. You might be asking, what are these super important circuits that CMOS is the best way of building? They are the circuits for digital logic, which we will devote a post to shortly!
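
To make “complementary” concrete, here is a toy sketch of the simplest CMOS circuit, an inverter, with each MOSFET treated as an ideal switch: the p-channel device connects the output to the supply when the input is low, and the n-channel device connects it to ground when the input is high, so in either steady state there is no path from supply to ground and (ideally) no static current flows. This is a logic-level caricature, not a circuit simulation.

```python
# Toy model of a CMOS inverter: each MOSFET acts as an ideal switch.
# The PMOS conducts when its gate is low; the NMOS conducts when its gate is high.

def cmos_inverter(input_high: bool) -> bool:
    pmos_on = not input_high   # PMOS pulls the output up to the supply when the input is low
    nmos_on = input_high       # NMOS pulls the output down to ground when the input is high
    # Exactly one transistor is on in either steady state, so there is no direct path
    # from supply to ground and essentially no static current flows.
    assert pmos_on != nmos_on
    return pmos_on             # output is high only when the PMOS is the one conducting

for level in (False, True):
    print(f"in={'1' if level else '0'} -> out={'1' if cmos_inverter(level) else '0'}")
```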

P-N Junctions: Building Blocks of Digital Electronics

Last time we mentioned n-type semiconductors, which have electrons as the charge carrier, and p-type semiconductors, which have holes as the charge carrier. But what happens when you put them together? The result is a device which underlies much of modern electronics!

Imagine a block of an n-type semiconductor pressed up against a block of a p-type semiconductor. What happens when the two make contact? Each material is electrically neutral, but the holes in the p-type material provide additional electron states which draw electrons from the n-type material. And when electrons travel into the p-type material, they leave holes behind them in the n-type material. Thus, at the interface the p-type semiconductor gains a negative charge, and the n-type semiconductor gains a positive charge. The region where this happens is called the space charge region, or the depletion region, since it has been depleted of mobile charge carriers. Even though this isn’t the lowest energy charge distribution, it is favorable overall because it makes use of more of the available states for both electrons and holes. (The rigorous version of this argument involves entropy, which I hope to dedicate a post to very soon!) So now the interface looks like a cluster of positive charge next to a cluster of negative charge, which creates an electric field across the junction!
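
The charge piled up on either side of the interface sets up a built-in potential across the junction. For a rough sense of its size, the standard textbook estimate is V_bi = (kT/q)·ln(N_A·N_D/n_i²); the doping levels and the silicon intrinsic carrier concentration used below are typical illustrative values, not measurements of any particular junction.

```python
# Rough estimate of the built-in potential of a silicon p-n junction:
#   V_bi = (kT/q) * ln(N_A * N_D / n_i^2)
# Doping densities and the intrinsic carrier concentration are assumed, typical values.
import math

k = 1.380649e-23      # Boltzmann constant, J/K
q = 1.602176634e-19   # elementary charge, C
T = 300.0             # room temperature, K

n_i = 1.0e10          # intrinsic carriers per cm^3 in silicon (approximate)
N_A = 1.0e17          # acceptor doping on the p side, per cm^3 (assumed)
N_D = 1.0e15          # donor doping on the n side, per cm^3 (assumed)

v_bi = (k * T / q) * math.log(N_A * N_D / n_i**2)
print(f"Built-in potential: {v_bi:.2f} V")   # roughly 0.7 V for these numbers
```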

If we want to pass current through the p-n junction, that electric field is going to either help us or hurt us, depending on which direction we want the current to flow. If the current flow is in the same direction as the force applied by the field, the current will be aided by the presence of the junction; if the current flow is in the opposite direction, it will be impeded. This kind of device is called a diode, because it conducts current in one direction and blocks it in the other.
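
The “conducts one way, blocks the other” behavior has a simple idealized formula, the Shockley diode equation, I = I_S·(e^(qV/kT) − 1). The sketch below just evaluates it for a few applied voltages; the saturation current I_S is an assumed, typical order of magnitude.

```python
# Ideal (Shockley) diode equation: I = I_S * (exp(q*V / (k*T)) - 1).
# The saturation current I_S is an assumed, typical order-of-magnitude value.
import math

kT_over_q = 0.02585           # thermal voltage at room temperature, volts
I_S = 1.0e-12                 # reverse saturation current, amps (assumed)

def diode_current(v):
    return I_S * (math.exp(v / kT_over_q) - 1.0)

for v in [-1.0, -0.5, 0.0, 0.3, 0.6, 0.7]:
    # Reverse voltages give a tiny leakage current; forward voltages give rapidly growing current.
    print(f"V = {v:+.1f} V -> I = {diode_current(v):.3e} A")
```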

Of course, since the strength of the electric field in the junction is limited by the size of the interface and the charge carrier concentration, a strong enough external electric field will push current through the device in both directions. This is called breakdown. Ideal diodes are assumed never to break down, but real diodes do, because the junction’s electric field is finite. So a p-n junction conducts like a normal semiconductor when voltage is applied in one direction, and in the other direction passes essentially no current until the voltage is high enough to induce breakdown.

Diodes are a building block that can be used to make a more complex electronic circuit, just like inductors, resistors, and capacitors. But p-n junctions specifically are the building blocks of most digital circuits in silicon! More on that coming soon!

Electrons and Holes

So far when we’ve talked about electronic properties of materials, we have emphasized electrons as the carriers of charge through the material. As we know, in atoms, nuclei are heavy and mostly immobile, whereas electrons are light and exist in a probability cloud around the nuclei. Thus the mobility and number of electrons, plus the available energy states, are what determine how easily charge can flow through a material. And this decides whether something is an insulator, a metal, or, most interestingly, a semiconductor.

But consider a material that has many, many electrons, one in which the band of electron states is nearly full, with only a few vacancies. Even with an applied electric field, very few electrons will be able to go anywhere if there are not many available states to move into. A material like this would be nearly an insulator. But we may see one electron move over into an empty state, then a second electron move into the state vacated by the first, then a third electron move into the state vacated by the second, and so on. The motion of the electrons causes a net flow of charge, but no individual electron is able to get very far because of the dearth of available states. From a distance it might almost appear as if the empty space without an electron is what’s moving.
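
Here is a toy picture of that bucket-brigade motion: a nearly full row of states with one vacancy, where at each step the electron next to the vacancy hops into it. Tracking the positions shows that every electron moves only one step, while the vacancy (the “hole”) marches steadily the other way. The nine-site row is just a made-up illustration.

```python
# Toy model of a nearly full band: '1' marks an occupied state, '0' the single vacancy.
# Each step, the electron just to the right of the vacancy hops left into it
# (say, pushed by a field). Each electron moves only once, but the vacancy
# travels the whole length of the row in the opposite direction.

band = [1, 1, 1, 1, 0, 1, 1, 1, 1]

def step(states):
    i = states.index(0)                                        # find the vacancy
    if i + 1 < len(states):
        states[i], states[i + 1] = states[i + 1], states[i]    # neighboring electron hops in
    return states

for _ in range(4):
    print(band, "  hole at index", band.index(0))
    step(band)
print(band, "  hole at index", band.index(0))
```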

This is similar to a very common occurrence in fizzy beverages such as soda or beer. Bubbles form, and once they detach from the sides of the container, they rise up through the liquid. But the force causing this motion is gravity, which affects the gas in the bubbles far less than it affects the relatively dense liquid around them: in order for the liquid to fall, the bubble must rise. Or imagine a row of seats, with a middle seat unoccupied but all the other seats full, in a narrow space that makes it difficult to get past occupied seats. The person next to the empty seat can move over, then the person next to them, and so on. It is the people who are doing the moving, but if we wanted to describe the motion it would almost be simpler to say that the empty seat moves to the edge of the row. There are lots of other examples of the same phenomenon, shown in the diagram below using marbles.

Thus, for the materials whose electron states are crowded but not quite full, the empty states are called ‘electron holes’ or just ‘holes’. Holes are quasiparticles, meaning we can treat them as individual particles even though they are really a collection of behavior exhibited by many particles. Conduction of charge still occurs via the movement of electrons, but conceptually and mathematically it is easier to describe the movement of holes in the system. So one can calculate the charge of a hole, which is the opposite of the charge of an electron, or the mass of a hole in various materials, or the hole mobility which describes how easy it is for a hole to traverse any given material. A material with holes as the charge carrier is called p-type, and a material with electrons as the charge carrier is called n-type,  because of the positive and negative charges of holes and electrons.

Practically, this is an important distinction between different types of semiconductor, and you’ll see how it comes into play in technology when we talk about p-n junctions and finally get to the transistor. But conceptually, I find it really cool that the emergent behavior of a bunch of electrons can be described as a quasiparticle, with its own mass, charge, and electronic properties. It’s elegant and weird, as nature often is.

I’m With The Band Theory

We have already talked about the mechanical aspects of how a solid is assembled from individual atoms, when the atoms are in a repeating periodic structure called a crystal lattice. The lattice type can determine many of the material properties, but one of the most interesting and useful properties to think about is electrical conduction. One of the simplest ways to measure conduction is by creating a potential difference, so that one side of the material is more energetically favorable to charged particles than the other. This is called applying a voltage, named after Alessandro Volta who invented the battery, a device that uses chemical differences to create a voltage. When there is a voltage applied, electrons are drawn through the material, but how many electrons flow (which determines how large the measured electrical current is) varies widely by material. In some solids where electrons move freely, it’s easy to pass a lot of electrons through, creating an electrical current. This is characteristic of a metal, where electrical conduction is easy. But in other solids, it takes a lot of energy to move electrons through so electrical conduction is difficult, and we say these materials are insulators. And there is a third class of material, a semiconductor, which can be switched between conducting and non-conducting states.

To understand what causes the different electrical behaviors, we can think about how the atomic energy states available to electrons scale up to a bulk material. Each individual atom has a set of available electron states, that can be mathematically described using quantum mechanics. Some of these states are occupied by electrons, and some are not. A collection of atoms will have a collection of states, some occupied and some free, and an electron has to have available states that it can move between in order to traverse a solid. If there are no available states, it doesn’t matter how energetically favorable it is for the electron to go somewhere else, it has no way of getting there.

For a solid made up of only one kind of atom, each atom contributes a similar set of electronic states, but when huge numbers of atoms sit close together those states overlap and split into many closely spaced levels. This means that if we sum up all the states, instead of the precisely delineated states we found in a single atom, we’ll have smeared-out bands of available states, separated by forbidden regions, which give a rough approximation of what energies electrons can have. As usual, the lowest energy states will be mostly occupied, and the highest energy states will be mostly empty. It’s the states right at the top of the electron occupancy which turn out to be the most useful for conduction, because of the minimal energy cost involved in moving electrons. (The line demarcating this is called the Fermi energy or Fermi level.) And how these available electron states look when we depict them as a function of energy can be very different from material to material, as shown below:

The various bands of allowed electronic states can overlap with each other, can have a small separation in energy, or can have a large separation in energy. Overlapping bands mean that electrons in both bands have many available states to transition to, and that is why materials with overlapping bands have high electrical conductivity. These are metals, like gold or copper. For materials with a large separation between bands, the lower energy band is completely full and the higher energy band is completely empty. If an electron in the full band is tempted to move through the material, it must first scrounge up the energy to jump up to an available state, which is so considerable a task that most electrons can’t manage it. These are insulators, which may pass only one electron for every 10^30 or so (a thousand billion billion billion, far more than the number of stars in the universe) that pass through a metal at the same potential.

But the most interesting case, at least as far as modern electronics is concerned, is the material with a small separation between bands: the semiconductor. Only a small amount of energy is needed to boost an electron from the full band to the empty band, and if that energy can be supplied thermally, semiconductors can show significant electrical conduction at room temperature. The most useful semiconductors are not quite conductive under normal conditions, but can easily be turned on by applying an electric field. That means they can operate as an electrical switch, acting as a metal or an insulator depending on what’s required. Silicon is the most widely used semiconductor today.
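
One way to put numbers on “small gap versus large gap” is the Boltzmann-type factor exp(−E_gap/2kT), which roughly sets the fraction of electrons that thermal energy can kick across the gap at a given temperature. The gap values below are commonly quoted approximate numbers for silicon and for silicon dioxide, used only to compare orders of magnitude.

```python
# Rough Boltzmann factor exp(-E_gap / (2*k*T)) for the fraction of electrons that can be
# thermally excited across a band gap at room temperature. Gap values are commonly quoted
# approximate numbers, used here only to compare orders of magnitude.
import math

kT = 0.02585   # thermal energy at ~300 K, in electronvolts

for name, gap_eV in [("metal (overlapping bands)", 0.0),
                     ("silicon (semiconductor)", 1.12),
                     ("silicon dioxide (insulator)", 9.0)]:
    fraction = math.exp(-gap_eV / (2 * kT))
    print(f"{name:30s} gap = {gap_eV:4.2f} eV -> excited fraction ~ {fraction:.1e}")
```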

Where exactly the bands of available states fall is determined by the crystal lattice type and the interatomic spacing, two factors which are themselves set by the atoms’ outer electrons. And for amorphous solids without a periodic structure, like glass, we still get energy bands. In fact, one way to think about the transparency of glass is that visible photons entering the material do not have enough energy to excite electrons from a filled band into an empty band, so they pass through the material without interacting with it. And that’s why you can see through glass!

All the justification for band theory involves a lot of math, of course. But just the basic idea, that bulk materials have bands of available states for electrons and the energy and grouping of these states determines electrical behavior, is pretty amazing because it puts a framework around the broad variety of electrical behaviors that we see in materials in nature. And, if you want to understand how electronics work, band theory is the first big piece of that puzzle.

Spin, Rotation, and a Plate Trick

In my introduction to the quantum number spin, I mentioned that particles can have half-integer or integer spin, and that which of the two a particle has deeply affects its behavior. This is not an easy statement to understand, especially without seeing the math. The allowed values for spin come from solutions to quantum mechanical energy equations. But what do differences in these values mean? How does a spin-1/2 particle behave differently than a spin-0 particle?

One major difference is in the behavior under rotation. When we try to calculate how rotation affects a particle with spin-0, we find that it doesn’t matter: the particle is indistinguishable before and after any rotation. However, a spin-1 particle requires a 360° rotation to return to its initial state, and a spin-2 particle requires a 180° rotation to return to its initial state. This may seem strange, but what it means is that the spin value describes the symmetry of the particle. If you imagine a deck of cards, the spin-2 particles are like face cards that look the same when rotated 180°. Spin-1 particles are like number cards which must be rotated 360° to look the same as they did when they started. Particles with integer spin are called bosons, after the Indian physicist Satyendra Bose.

There are no playing cards which must be rotated 720° in order to look the same, and yet this is the case for spin-1/2 particles. There are few macroscopic objects that can demonstrate this property, but one of them is your hand! Place any object on your palm, facing up, and rotate your hand without letting the palm turn over. After 360° you will find your arm to be pretty contorted, but after 720° of rotation your arm has regained its initial position! Another way to think of it is that, instead of a 360° rotation bringing the object back to its initial state, which would be like multiplying by 1, a 360° rotation brings the object to another state, like multiplying by -1, and then an additional 360° rotation multiplies by (-1)*(-1), which equals 1. Every spin-1/2 particle shares this behavior, such as quarks (the constituents of protons and neutrons) and electrons. We call these particles fermions, after the physicist Enrico Fermi.
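
The “multiply by -1 after 360°” statement can be checked directly with the quantum rotation operator for a spin-1/2 state: rotating about the z axis by an angle theta multiplies the two spin components by exp(∓iθ/2), so a 2π rotation flips the overall sign and only a 4π rotation brings the state all the way back. A small numerical sketch:

```python
# Rotating a spin-1/2 state about the z axis by angle theta applies the operator
# diag(exp(-i*theta/2), exp(+i*theta/2)). A 360-degree rotation gives an overall
# factor of -1; only a 720-degree rotation returns the original state.
import numpy as np

def rotate_spin_half(state, theta):
    rot = np.diag([np.exp(-1j * theta / 2), np.exp(+1j * theta / 2)])
    return rot @ state

up = np.array([1.0 + 0j, 0.0 + 0j])               # "spin up" state

after_360 = rotate_spin_half(up, 2 * np.pi)       # overall sign flipped relative to `up`
after_720 = rotate_spin_half(up, 4 * np.pi)       # back to the original state
print("after 360 degrees:", np.round(after_360, 3))
print("after 720 degrees:", np.round(after_720, 3))
```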

That factor of -1 becomes important because of the idea in quantum mechanics that particles are interchangeable or identical. That is, we cannot tell one specific electron from another. Mathematically, you can state this by writing a function that describes the positions of two particles, and seeing what happens to that function when you exchange the particles. If you do this, what you find is that bosons are symmetric under particle interchange and the function stays the same, but fermions are antisymmetric under particle exchange, and the function is multiplied by -1.

This idea, that bosons are symmetric and fermions are antisymmetric under exchange of identical particles, is called the spin-statistics theorem. A thorough proof requires relativity and quantum field theory, but the fundamental cause is the differing rotational behavior due to spin as a measure of symmetry. One very important consequence of all of this is that if you have two fermions occupying the same state, and you exchange them, you find that the function describing their position cancels out to zero. This is a mathematical statement of the Pauli exclusion principle forbidding two fermions from being in the same quantum mechanical state!
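
A tiny numerical illustration of that cancellation: build the antisymmetric two-particle combination ψ(x1, x2) = φ_a(x1)·φ_b(x2) − φ_b(x1)·φ_a(x2) out of two single-particle states. Swapping the particles flips its sign, and using the same state twice makes it identically zero, which is the exclusion principle in miniature. The Gaussian “orbitals” below are just convenient stand-in functions, not states of any real system.

```python
# Antisymmetric two-fermion wavefunction built from single-particle states phi_a, phi_b:
#   psi(x1, x2) = phi_a(x1)*phi_b(x2) - phi_b(x1)*phi_a(x2)
# Swapping x1 and x2 flips the sign; using the same state twice gives zero everywhere.
import numpy as np

def phi(center):
    """A stand-in single-particle state: a Gaussian bump at `center` (illustrative only)."""
    return lambda x: np.exp(-(x - center) ** 2)

def antisym(phi_a, phi_b, x1, x2):
    return phi_a(x1) * phi_b(x2) - phi_b(x1) * phi_a(x2)

a, b = phi(-1.0), phi(+1.0)
x1, x2 = 0.3, -0.7

print("psi(x1, x2)      =", antisym(a, b, x1, x2))
print("psi(x2, x1)      =", antisym(a, b, x2, x1))    # same magnitude, opposite sign
print("same state twice =", antisym(a, a, x1, x2))    # identically zero: Pauli exclusion
```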

On the other hand, we find that bosons are perfectly happy to all pile into the same quantum mechanical state, at least at low temperatures. This is the concept behind the Bose-Einstein condensate, a state of matter first experimentally realized in 1995, in which bosons can be cooled into occupying the same state.

I hope this makes the connections between spin, the Pauli exclusion principle, and particle types clearer. But if nothing else, the rotational exercise with an object on your hand, better known as Feynman’s plate trick, is fun at parties.

The Rainbow of Bonds

Now that we have looked at the broader picture of what a bond is, we can go a little deeper. Bonds can be easy or hard to break, they can involve particle exchange between atoms, they can be the result of transient forces, and they can react in a variety of ways. There is a rainbow of bond types to explore, but we can focus on a few primary examples.

We’ll start with the stronger sort of bonds: those that involve direct transfer of electrons between atoms. For example, say we have two neighboring atoms, one with an empty low-energy state and one with an outer electron that’s all alone at a high-energy state. If the states are similarly shaped, both atoms can lower their overall energy when the extra electron moves to the low-energy state. The atom that gave up the electron is now positively charged, and the atom that accepted the electron is negatively charged, so there is an electrostatic force attracting them. Charged atoms are also called ions, so we say that these two atoms have an ionic bond. And it’s possible to have ionic bonds involving more than one electron, if an atom has two or three electrons to donate which another atom can accept. A common example of ionic bonding is table salt, which has a sodium atom donate an electron to a chlorine atom.

It’s also possible for two atoms to share a pair of electrons, so that the electron cloud overlaps with both atomic nuclei. If the electrons in question have oppositely aligned spins, they can have the same energy without being in the same quantum mechanical state. This is called covalent bonding. It happens most often when the two atoms in question are comparably attractive to electrons, for example if they are the same type of atom. Graphite, or pencil lead, is one form of carbon that has covalent bonds. So is graphene, the atomically thin version of graphite whose discovery (and extraordinary properties) recently garnered a Nobel prize in physics.

Ionic and covalent bonds tie atoms together very tightly, and can be linked together to form complexes with many bonded atoms. These complexes are known as molecules. But large numbers of atoms can also share electrons diffusely, so that the electrons aren’t localized to a single atom or a pair of atoms. This is called metallic bonding, because such delocalized electrons are found in metals. The free electrons move around the atomic nuclei like a sea moving around rocks, only weakly bound to them. The mobility that electrons have in metals is why we say that metals have high ‘electrical conductivity’: it is easy to pass an electrical current, which is just a flow of electrons, through a metal. Something similar happens on a smaller scale in molecules with partially delocalized electrons, like the ring-shaped aromatic molecules that are everywhere in organic chemistry.

Another way to weakly bind atoms comes from the fact that charge is separated within an atom, between the positively charged nucleus and the negatively charged electron cloud. Imagine that the cloud is slightly distorted, by a passing electric field or by a random fluctuation. If the electron cloud is not symmetric around the nucleus at that moment, there will be a distance between the center of the positive charge and the center of the negative charge, and a force because of the opposite charges. This is called a dipole in electromagnetism, because of the two oppositely charged poles. And if you have two next to each other, they will try to align so that the negative side of one dipole is near the positive side of the other. What starts as a small fluctuation can cause a slight reordering across a large material, because of the dipoles attempting to align. This dipole-dipole interaction, one of the van der Waals forces, is another weak form of bonding. It can happen with induced dipoles, as I’ve described, or between permanent dipoles, which are common in molecules.

There is also a lone form of chemical bonding which doesn’t rely solely on electrons. The hydrogen atom, with its single proton and single electron, is pretty small and pretty reactive. So it’s actually possible for two atoms to share a third atom, hydrogen, which means that both the electron and the proton are in energy states that minimize the total system energy. The hydrogen bond is partly covalent, since the hydrogen electron is usually paired with a second electron. But the separation of the proton and electron also induces a dipole, making hydrogen bonding a dipole-dipole interaction. Hydrogen bonding may sound like a strange beast, and it is, but it is an important factor in the chemical behavior of water which is essential to life as we know it.

What is spin?

First the basics: spin is an intrinsic property of matter, like charge or mass. It is measurable in the real world by observing interactions with magnetism, and is the basis of technologies like MRI and hard disk drives!

We of course recognize the verb ‘to spin’, which means to rotate around a fixed axis the way that wheels, figure skaters, and the Earth do. But the word spin is also used to describe a fundamental property of particles. We have already talked a little about a fundamental property, charge, which was useful because a lot of the important forces at the atomic scale are electromagnetic and thus related to charge. And we remember that mass, another fundamental property, determines how matter interacts via the gravitational force. Spin is a bit different.

The idea of particles having an intrinsic spin first arose during the development of quantum mechanics, when Wolfgang Pauli and others noticed that part of the mathematical solution for particle states resembled angular motion, as if the particles were physically spinning around an axis. But unlike spinning at the macroscopic scale, quantum spin can only take a few discrete values: integer and half-integer multiples of ħ, the reduced Planck constant. The allowed values of spin are clustered around zero, and the ħ factor is dropped by convention because particle physicists like to make things look simple. So the photon, the quantum of light, has spin 1, whereas electrons and quarks, which make up protons and neutrons, have spin 1/2. There are also particles with spin 0, 3/2, and 2. As with charge, spin is reminiscent of a behavior we see in the macroscopic world, but its values are quantized into a few allowed values.

Spin can have one of two orientations along a chosen axis, meaning we can have an electron with spin +1/2 and one with spin -1/2. And charged particles like the electron actually respond to magnetic fields differently depending on the sign of their spin! This is because a charged particle with spin carries a small magnetic moment, which points in one direction for positive spin and the opposite direction for negative spin. This is the basis of the famous Stern-Gerlach experiment, in which atoms with one free electron are sorted by their spin under the influence of a magnetic field. But it’s also the basis of nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI), two related techniques for determining the composition and structure of either chemical substances or human patients! Strong magnetic fields can be used to align spins within an object, and how quickly the spins relax back to their original orientation gives information about what is inside the object. Currently, researchers are trying to build circuits that use spin instead of charge to carry information, a field called ‘spintronics’.

But at a more basic level, when we talked about chemical bonds we skipped over the importance of spin. The reason spin matters for bonding is the Pauli exclusion principle, the idea that no two electrons can share the same quantum state. In the development of quantum mechanics, it became clear from the data that even when all the known quantum numbers were accounted for, there still seemed to be a twofold degeneracy, with two electrons sharing what was thought to be a single quantum state. This can be explained by introducing a new quantum number, which we call spin. So spin is one more quantum number shaping which states electrons can occupy, and it is critical to the understanding of chemical bonding.

But there are actually even more strange things about spin than I can fit in this post, including the fact that the Pauli exclusion principle only applies to particles with half-integer spin! Half-integer and whole-integer spin particles are fundamentally different from each other, in some pretty interesting ways, but why is a story for another time!

Electrons, Bonding, and the Periodic Table

The structure of the periodic table of elements is a bit weird the first time you see it, like a castle or a cake. If we just read the periodic table top to bottom and left to right, we are reading off the elements in order of increasing number of protons. However, if this were the only useful ordering on the periodic table, it could be a simple list. The vertically aligned groups on the periodic table actually collect elements with similar chemical properties. Dmitri Mendeleev developed the table in 1869 as a way both to tabulate existing empirical results and to predict what unexplored chemical reactions or undiscovered elements might be possible. It was revolutionary as a scientific tool, but the mechanism behind the periodicity was not understood until decades later. As it turns out, the periodicity of chemical behavior corresponds to the arrangement of the outer electrons in different atoms, which governs how they bond.

To understand what that means, we can start by looking at the elements on the left side of the periodic table. Hydrogen has only one proton, so the electrically neutral form of hydrogen has only one electron. This single electron is a point particle, jumping around the nucleus. The electron exists in a probability cloud, whose shape is given by the lowest energy solution to the quantum mechanical equations describing the system. These quantum states can be distinguished by differing quantum numbers for various quantities like spin and angular momentum, and we will talk about these in more depth later on. When we add additional electrons, they all want to be in the lowest energy state as well. Sadly for electrons but happily for us, no two electrons are able to occupy the same quantum state: they must differ in at least one quantum number. This is known as the Pauli exclusion principle, and was devised to explain experimental results in the early years of quantum mechanics. So while the single electron in hydrogen gets to be in the lowest energy state available for an electron in that atom, in an atom like oxygen, its eight electrons occupy the eight lowest energy states, as if they are stones stacked in a bucket.
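
Here is a toy version of that “stones stacked in a bucket” filling: list the lowest-energy orbitals in order and drop electrons in, at most two per orbital (one for each spin), as the exclusion principle allows. The orbital order and capacities below are the usual textbook filling sequence, hard-coded for simplicity rather than computed from first principles.

```python
# Toy "bucket filling" of atomic orbitals: electrons go into the lowest-energy orbitals
# first, at most two per orbital (opposite spins), per the Pauli exclusion principle.
# The orbital order and capacities are the standard textbook filling sequence, hard-coded.

ORBITALS = [("1s", 2), ("2s", 2), ("2p", 6), ("3s", 2), ("3p", 6), ("4s", 2), ("3d", 10)]

def electron_configuration(n_electrons):
    config, remaining = [], n_electrons
    for name, capacity in ORBITALS:
        if remaining <= 0:
            break
        filled = min(capacity, remaining)
        config.append(f"{name}^{filled}")
        remaining -= filled
    return " ".join(config)

print("hydrogen:", electron_configuration(1))   # 1s^1
print("oxygen:  ", electron_configuration(8))   # 1s^2 2s^2 2p^4
print("silicon: ", electron_configuration(14))  # 1s^2 2s^2 2p^6 3s^2 3p^2
```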

But what’s really interesting about these higher energy electron states is that they have different shapes, as we can see by the mathematical forms that describe the possible probability distributions for electrons. So while the electron cloud in a hydrogen atom is a sphere, there are electron clouds for other atoms that are shaped like dumbbells, spheres cut in two, alternating spherical shells, and lots of other shapes.

The electron cloud shape becomes important because two atoms near each other may be able to minimize their overall energy via electron interactions: in some configurations the sharing of one, two, more, or even a partial number of electrons is energetically preferred, whereas in other configurations sharing electrons is not favorable. This electron sharing, which changes the shape of the electron cloud and affects the chemical reactivity of the atoms involved, is what’s called chemical bonding. When atoms are connected by a chemical bond, there is an energy cost necessary to separate them. But how atoms interact depends fundamentally on the shape of the electron cloud, determining when atoms can or can’t bond to each other. So the periodic table, which was originally developed to group atoms with similar chemical properties and bonding behaviors, actually also groups atoms by the number and arrangement of electrons.

Now, there is a lot more that can be said about bonding. You can talk about the inherent spin of electrons, which is important in bonding and atomic orbital filling, or you can talk about the idea of filled electron shells which make some atoms stable and others reactive, or you can talk about the many kinds of chemical bonds. It’s a very deep topic, and this is just the beginning!

Since every real world object is a collection of bonded atoms, the properties of the things we interact with, and what materials are even able to exist in our world, depend on the shape of the electron cloud. Imagine if the Pauli exclusion principle were not true, and all the electrons in an atom could sit together in the lowest energy state. This would make every electron cloud the same shape, which would remove the incredible variety of chemical bonds in our world, homogenizing material properties. Chemistry would be a lot easier to learn but a lot less interesting, and atomic physics would be completely solved. Stars, planets, and life as we know it might not exist at all.

The Electron Cloud

There is a popular image of the atom that shows the nucleus as a collection of balls, with ball-like electrons following circular orbits around them. The parallels to our own solar system, to the orbits of the moon around the earth and the earth around the sun, strike a chord with most people, but the depiction is inaccurate. It is based on the ideas of several prominent early twentieth century physicists, developed after the discovery of the electron in 1897 showed that atoms were not the smallest building block of nature. There are two serious mistakes in this image, and the actual structure of the atom is a lot more interesting.

The first problem is that the proton itself is not an indivisible particle: it’s composed of three quarks, subatomic particles which were hypothesized in the early sixties and observed in experiments beginning in the late sixties. The same is true of the neutron: it’s also composed of three quarks, though the flavor composition is different than that of the proton. (“Flavor composition”? Yes, quarks come in different flavors.) So those giant balls in the nucleus are actually comprised of smaller particles. At this point we believe quarks to be themselves indivisible, not composed of another even smaller particle.

But the second reason this picture is incorrect is that the electron doesn’t follow a well-defined orbit around the proton, the way gravitationally orbiting bodies do. In fact, due to the small mass of the nucleus and the even smaller mass of the electron, gravity is the least important force in an atom. The electromagnetic force, between the oppositely charged proton and electron, is much larger than the gravitational force between these tiny objects. But wait, you might say, if there’s such a large attractive force, shouldn’t the electron just spiral into the proton? This quandary illustrates perfectly why we can’t rely on classical physics, which was built up for objects comprised of billions of atoms, to describe the particles within a single atom. Because yes, if we had two oppositely charged billiard balls with a weak gravitational interaction and a strong electromagnetic interaction, they would crash into each other! But the electron is so small and so light that we cannot treat it as a classical object.

Here is where quantum mechanics comes into play. Quantum mechanics as a whole is a set of mathematical constructions used to describe quantum objects, and it’s quite different from what’s used for classical, large-scale physics. There are all sorts of interesting consequences of quantum mechanics, such as the Heisenberg uncertainty principle, which states that for some pairs of variables, such as energy and time or position and momentum (mass times velocity), how precisely you can measure one depends on how precisely you are measuring the other. For each such pair of variables, there is a basic joint uncertainty in measuring both, which is very small but becomes relevant at the quantum scale. This shared minimum uncertainty between related variables is a fundamental property of nature. For momentum and position, this leads to that old joke about Heisenberg being pulled over for speeding: the police officer asks, “Do you know how fast you were going?” and Heisenberg responds, “No, but I know exactly where I am!”

What the uncertainty principle means here is that the electron is actually incapable of staying in the nucleus. Imagine a moment in time where the electron is within the nucleus: now its position is very well known, so there is a large uncertainty in its momentum. Thus the velocity may be quite high, which means that a moment later the electron will have moved far from the nucleus. In fact, because of the uncertainty in position, we cannot ever really say where in space the electron is. It is more accurate to talk about its position as determined by a probability cloud, which is denser in places that the electron is more likely to be (near the nucleus) and less dense where it is less likely to be (far from the nucleus). This also takes into account the wave nature of the electron as a quantum object, which we’ll get into another time.
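
To see with numbers why the electron can’t sit in the nucleus, we can use the relation Δx·Δp ≥ ħ/2 to estimate the minimum momentum that confinement forces on the electron, first for a nucleus-sized region and then for an atom-sized one. The sizes below are round, typical values used only for illustration: the nuclear case comes out around a hundred million electronvolts, vastly more energy than the electrical attraction could hold, while the atomic case comes out at a few electronvolts, just the scale of real atomic energy levels.

```python
# Order-of-magnitude estimate from the uncertainty principle: confining an electron to a
# region of size dx forces a momentum spread of at least hbar / (2*dx). Compare confinement
# to a nucleus (~1 femtometer) with confinement to an atom (~0.05 nanometers).
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e = 9.1093837e-31      # electron mass, kg
c = 2.998e8              # speed of light, m/s
eV = 1.602176634e-19     # joules per electronvolt

for label, dx in [("nucleus, ~1 fm", 1e-15), ("atom, ~0.05 nm", 5e-11)]:
    p = hbar / (2 * dx)                     # minimum momentum spread
    if p * c > m_e * c**2:                  # highly relativistic: kinetic energy ~ p*c
        energy = p * c
    else:                                   # non-relativistic: p^2 / (2m)
        energy = p**2 / (2 * m_e)
    print(f"{label}: minimum kinetic energy ~ {energy / eV:.2e} eV")
```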

With this knowledge, we can discard that old image of an electron orbiting a nucleus. A single electron, even though it is measurable as an individual, indivisible particle, exists as a cloud around the nucleus. The shape of the cloud is described by quantum mechanics, and as we add more electrons to the atom, we will find a whole gallery of electron cloud shapes. These shapes are the heart of interatomic bonding, as we will see.

Thinking About Collections of Atoms

On a basic level, science is about asking why the world is the way it is, and engineering is about asking, how can we use that to better our condition? There is certainly a lot of interplay between the two; they inform each other and rely on each other. And in my mind, a good scientist should always be a reasonable engineer, and vice versa. So if we want to understand what the science is that underpins a lot of our current technology, we first have to ask a lot of “why” questions about the world around us. Such as, why do different objects and materials have different properties? Why are there different forms that matter can take? Why do some forms appear on Earth and some don’t? Then, once we know what makes materials different from each other, we can start talking about how to use that to do something useful.

So what is the difference between the atoms in a metal table, the atoms in a cup of coffee, and the atoms in our hands? There are two major differences that are relevant: first, the atoms themselves come in a wide variety of types, and second, they can be arranged with other atoms in many unique ways that affect the properties of the resultant material. The image below shows how we can arrange the same silicon and oxygen atoms in a random way or an ordered way, to get either amorphous silica glass or crystalline quartz. This change in ordering affects the physical properties of the resultant material.

We touched on the many types of atoms before when we discussed the number of protons in an atom. Proton number determines electron number, because of the attractive force between protons and electrons due to their opposite charge: a neutral atom has exactly as many electrons as protons, and even ions have only a few more or fewer. How many electrons an atom has is very important, because the cloud of electrons is much larger than the compact nucleus which contains the neutrons and protons, so electrons are the primary means by which an atom interacts with the world.

What “the world” means here is primarily other atoms. So to assemble a solid, we have lots of atoms whose electron clouds are interacting with each other. Atoms can share electrons, they can be attracted to each other if they have opposite charges, and they can form three-dimensional structures that allow many atoms to interact. These interactions are all electromagnetic, stemming from charge as we discussed earlier. Different kinds of atoms will experience different forces in different environments, so we end up with a whole slew of ways to assemble atoms. We can pack carbon into sheets and get pencil lead, we can jam it together with no ordering and get charcoal, we can compress it until it has a dense, flawless periodic structure and get diamond, or we can mix it with hydrogen to get the long hydrocarbon chains that crude oil is made of. And that’s just carbon!

Now, the obvious question to ask here is why atomic species and ordering vary, and why those variations lead to different material types. We’ll get into the first question shortly, but the second will take a lot longer to answer.