The Colors of Noise

Talking about the ‘science’ of something often evokes ideas like precision, or a very controlled study where every parameter is meticulously accounted for. But the natural world is not necessarily precise or controlled; evolution gives the quickest functional solution, rather than the most elegant. And as with entropy, management of disorder and messiness is often simpler to achieve than full control. So even as we attempt to study and engineer our natural environment, we have to accept some level of randomness and chaos, or even perhaps take advantage of it as nature does.

It’s with that in mind that I’d like to talk about noise. Colloquially, noise refers to sounds, but specifically disordered or cacophonous ones. This comes from the idea of noise as a sort of randomness that’s part of a signal carrying information, like static on a phone line or a video. Those examples of noise are like random factors that affect the timbre or perception of the signal itself, but one can also think about noise more generally in nature as a perceivable, measurable result of some inherently random physical process. Such processes are called ‘stochastic’, as opposed to ‘deterministic’ processes, whose outcomes are completely fixed by their initial conditions.

But as you might imagine, there are lots of different physical phenomena that might lead to noise. So the noise resulting from stochastic processes has a sort of fingerprint that can be read out and may give useful information about the system the noise came from. For example, noise on a telephone line may be different from noise in a photodetector, even if both come from random things that electrons are doing in the material. But how to tell the two apart? Variations in noise are usually analysed by separating out the noise by frequency, which is to say looking at what parts of the signal occur once per second, versus twice per second, versus a hundred or a thousand times per second. This is described as the ‘power spectrum’ of the noise, but more lyrically, the most common power spectra for naturally occurring noise are called the colors of noise.
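
To make the idea of a power spectrum concrete, here is a minimal sketch in Python (my own illustration, not from the original post, and assuming the numpy library is available) that generates a random signal and tallies how much power falls in a few frequency bands. The sampling rate and band edges are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(0)
    fs = 1000                            # sampling rate in Hz (arbitrary)
    samples = rng.normal(size=fs * 10)   # ten seconds of white, thermal-like noise

    spectrum = np.abs(np.fft.rfft(samples))**2     # power at each frequency
    freqs = np.fft.rfftfreq(len(samples), d=1/fs)  # the corresponding frequencies

    # White noise carries roughly the same power per frequency bin in every band:
    for lo, hi in [(1, 10), (10, 100), (100, 500)]:
        band = (freqs >= lo) & (freqs < hi)
        print(f"{lo}-{hi} Hz: mean power per bin ~ {spectrum[band].mean():.0f}")

Running this prints three numbers of about the same size, which is exactly the flat spectrum described next.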

The power spectrum above shows noise that has the same power at all frequencies, which is called white noise, because of the idea that white light contains all the visible colors in equal proportions. White noise comes from thermal processes, which is to say atoms using the heat and kinetic energy around them to jump up to higher energy states and then back down. And since thermal processes don’t have any preference for one frequency of activity over another, the frequency spread of thermal noise is flat.

There’s also brown noise, which isn’t named for the color brown but rather for the scientist Robert Brown. He was a botanist who discovered that pollen grains in water moved in a random pattern when observed under a microscope, and initially could not explain the mechanism. Many decades later, Einstein explained the movement as a result of molecules in the water buffeting the pollen grains, causing a random movement of the grain itself to be visible even though the water molecules were too small to see. This movement is called Brownian motion, or a random walk, and so Brownian noise is characterized by enhanced low frequencies as you’d get in a random walk. Auditory brown noise sounds like a waterfall, softer than the other colors of noise. Here’s an animation of the pollen grains colliding with smaller water molecules, where you can see the randomness that we can call noise:
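
In the same spirit, here is a small numerical sketch of my own (Python again, assuming numpy): each new sample of a random walk is the previous one plus a random kick, like the grain being nudged by water molecules, and the resulting noise has far more power at low frequencies.

    import numpy as np

    rng = np.random.default_rng(1)
    kicks = rng.normal(size=100000)   # the random buffeting at each time step
    brown = np.cumsum(kicks)          # the random walk traced out by the grain

    spectrum = np.abs(np.fft.rfft(brown))**2
    freqs = np.fft.rfftfreq(len(brown), d=1.0)

    # Power piles up at low frequencies, roughly a 1/f^2 falloff:
    low = spectrum[(freqs > 0.0001) & (freqs < 0.001)].mean()
    high = spectrum[(freqs > 0.1) & (freqs < 0.5)].mean()
    print(f"low-frequency power is ~{low / high:.0f} times the high-frequency power")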

Pink noise, or flicker noise, decreases in power as the frequency of the noise increases. It’s also called 1/f noise, because the power of pink noise goes inversely with the frequency f. Pink noise comes from the trapping and detrapping of charge carriers like electrons, a stochastic process whose slow fluctuations put more power at lower frequencies. Because the human ear is less sensitive to higher frequency noises, pink noise is often used as a reference signal in audio engineering.
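
There are several ways to synthesize pink noise; one simple sketch of my own (again assuming numpy, and not taken from the post) is to start with white noise in the frequency domain and rescale each component so that the power falls off as 1/f.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 2**16
    white = np.fft.rfft(rng.normal(size=n))   # white noise in the frequency domain
    freqs = np.fft.rfftfreq(n, d=1.0)

    shaping = np.ones_like(freqs)
    shaping[1:] = 1.0 / np.sqrt(freqs[1:])    # amplitude ~ 1/sqrt(f), so power ~ 1/f
    pink = np.fft.irfft(white * shaping, n=n)

    # Each decade of frequency now carries roughly the same total power:
    spectrum = np.abs(np.fft.rfft(pink))**2
    for lo, hi in [(0.0005, 0.005), (0.005, 0.05), (0.05, 0.5)]:
        band = (freqs >= lo) & (freqs < hi)
        print(f"{lo}-{hi}: total power ~ {spectrum[band].sum():.0f}")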

But just as with light, our senses and processing of noise can distort it between its physical origin and our perception of it. So for example, we may perceive white noise, which has a flat power spectrum, as louder for frequencies that our ear can better detect. I mentioned before that the human ear is less sensitive to higher frequencies, but overall that sensitivity can be mapped to create a power spectrum that will appear flat given our sensory distortion. This sort of noise is called grey noise.

There are other colors of noise, such as blue and violet noise which increase in power at higher frequencies, green noise which is the selected center of the power spectrum for white noise, and black noise which is a fancy way of saying silence. You can listen to some colors of noise here. There is also a type of noise specific to small numbers of some countable event, like trapping and detrapping, where the discreteness of the event becomes important: relative fluctuations loom much larger when the number of events is small than when it is large. This is called shot noise, and it is common in any signal built from a small number of countable events, like measurements of individual electrons or photons. But noise is just a consequence of the inherent randomness in our physical world, an avatar revealing the physical mechanism behind its own creation.

Defects in the Crystal Structure of Materials

Describing phenomena from the real world mathematically has a tempting sort of elegance and simplicity, but there’s a danger of oversimplifying. You risk missing out on some of the really complex beauty that’s out there! I always experience a bit of glee when a simple explanation turns out to miss something messy but important, and for that reason I’d like to tell you about defects in crystal structure and why they matter.

A while back, we talked about periodic structure in solids, and how the geometric arrangement of atoms in a material determines many of the material properties. For electronics, the density of mobile charge carriers (electrons or holes) and the availability of energy states for those carriers to occupy are the two main relevant material properties. From these we can get metals, which conduct electrons easily, or insulators, which don’t, or semiconductors, which are somewhere in between. Modern computing is largely based on devices built from semiconductors, connected to each other using metals, with insulators used for electrical isolation and a few other tricks.

But while it’s helpful to imagine a perfect material, with each atom positioned just where it should be in an infinitely repeating array, nature is messy. We can learn a lot from theoretical material models, but the materials we find in the real world will be different. We call any deviation from perfect crystal lattice structure a defect, and the interesting thing about defects is that they can affect material properties too, especially with nanoscale objects that can have a high ratio of “defect” atoms to “normal” atoms.

There are a few different kinds of defects one finds in natural materials. For example, an atom might be missing from a lattice site, which is called a vacancy, or there might be an extra atom where there shouldn’t be one, which is called an interstitial. There could be a whole plane of atoms that just starts in the middle of the crystal, which is called an edge dislocation. Or two regions with different orientations of their crystal structure might be pressed up against each other, which is called a grain boundary. There are lots of ways that a crystal lattice can get messed up, but remember, for a given material the native crystal lattice is usually the minimal energy solution for packing those atoms together. So the most probable outcome for any given region is a perfect lattice, but the material will end up with some deviations from it as well. How many defects there are depends on how energetically favourable the lattice is, but also on how the material was made: if you solidify a crystal really slowly, the atoms are more likely to find the true lowest energy positions (sites in the perfect lattice) and there are fewer defects.
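
As a rough rule of thumb (standard in materials science textbooks rather than taken from this post), the equilibrium fraction of, say, vacant sites in a crystal follows a Boltzmann factor,

    n_vacancies / N_sites ≈ exp(−E_f / k_B T),

where E_f is the energy cost of forming the defect, k_B is Boltzmann’s constant, and T is the temperature. A more energetically costly defect or a lower temperature means a smaller equilibrium fraction, while rapid quenching can freeze in the larger defect population of a higher temperature.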

These defects all matter because they affect material properties, so if we assume that a material is going to behave as it would with a perfect crystal lattice, we are in for a surprise! If we’re focused on practicalities, like whether the material is as electrically conductive as we think it should be, then the type and concentration of defects in the device is very important, because some defects will reduce current flow and others will enhance it. To understand why, let’s think about the physical mechanisms for current flow through a material.

In a crystal, we have an array of positive nuclei, each with some tightly bound electrons; we can call these ‘localized’ electrons because they stay in the same approximate location around the nucleus. Electrons that were more loosely bound to the original atoms are able to move freely through the crystal; when these ‘delocalized’ electrons move through the material, that is what we measure as current flow. But a large part of the reason we can even have delocalized electrons is the periodicity of the lattice, which mathematically allows delocalized wave states for the electrons. When this periodicity is broken, as it is by dislocations and grain boundaries, the wave states can be disrupted, and new bound states that localize electrons at the defect are created. These are often called ‘charge traps’ because they trap the charge carriers in a specific place. However, if we ‘dope’ a material by carefully adding impurity atoms that have an extra electron, we can add more electrons in total without badly disrupting the wave states, which actually increases the conductivity of the material. So in device manufacture, controlled doping is a common use of defects to tune material properties, while dislocations and grain boundaries are usually avoided because their effects on the material are undesirable.

In nanoscience defects are even more important, because with a nanoscale object that’s just hundreds of atoms across instead of hundreds of billions, the percentage of atoms sitting at non-ideal crystal sites can get pretty high. For example, many surface atoms end up acting as defects that trap charge, and a nanocrystal may be 30% surface atoms! So lots of nanoelectronics research is focused on minimizing the negative impact of defects. However, there’s an interesting contradiction there: the fascinating thing about nanomaterials is the quantum-like properties they show compared to bulk materials. But quantization in nanomaterials stems largely from the greatly increased proportion of grain boundaries, surface atoms, and defects, all of which grow in relative importance as you scale down the size of the material. So there’s a careful balance between mitigating the negative effects of defects and not getting rid of the nanoscale-specific behaviors that make nanoelectronics interesting in the first place!

What is a memristor?

One of the most hyped devices of the past few years has been the memristor, and in comparison to other circuit elements I think you’ll agree it’s pretty weird, and interesting!

The existence of the memristor was first predicted in 1971 by Leon Chua. The behavior of basic circuit elements had long been described mathematically, with each circuit element having a given relationship between two of four quantities: charge (q), current (I), voltage (V), and flux (φ). Chua noticed that the mathematical equations could be tabulated and had a symmetry of sorts, but that one equation seemed to be missing, for a device that related charge to magnetic flux. The behavior of such a device would change depending on how much charge had passed through it, and for this reason Chua called it a ‘memristor’, a contraction of ‘memory’ and ‘resistor’. You can see the mathematical relationships represented below as the edges of a tetrahedron (they are also sketched out just after this paragraph). Resistor behavior is quantified by the resistance R and capacitor behavior is quantified by the capacitance C, in the two equations near the top. And on either side, we have relations for the inductance L and the memristance M. It’s not crucial to understand these equations intimately, just to see that they have a certain symmetry and completeness to them as a set of relations between these four key variables. Five of these relationships had been observed in real devices or followed from definitions, and they mathematically suggested the sixth, the memristance equation on the right. But having the equation doesn’t tell you how to make a device that will exhibit that behavior!
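
Since the figure itself isn’t reproduced here, the six relationships (written in the differential form usually used to present Chua’s argument) are, roughly:

    dq = I dt        dφ = V dt        (definitions of current and voltage)
    dV = R dI        dq = C dV        (resistor and capacitor)
    dφ = L dI        dφ = M dq        (inductor and memristor)

The first two are just definitions; the last four describe devices, and the final one, relating flux to charge through the memristance M, is the one Chua found to be missing.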

Chua’s initial proposals to physically create a memristor used external power to store the remembered information, making the component active rather than passive. However, in 2008 a real-world memristor was created using a nanoscale film containing embedded mobile charges, which can be pushed around by applying an electric field. How much of the film the extra charge occupies determines how much of the device has high resistance and how much has low resistance, so the total resistance depends on how much charge has passed through the device.
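
Here is a toy simulation of that idea, written by me in Python (assuming numpy) and only loosely inspired by published device models; all parameter values are invented for illustration. The resistance interpolates between a low and a high value depending on an internal state w, and w drifts in proportion to the charge that has flowed.

    import numpy as np

    R_ON, R_OFF = 100.0, 16000.0    # ohms: low-resistance vs high-resistance regions
    k = 2e5                         # how strongly charge flow moves the internal state
    dt = 1e-4                       # seconds per simulation step
    w = 0.1                         # state variable: 0 = all high-R, 1 = all low-R

    t = np.arange(0, 0.1, dt)
    v = 1.5 * np.sin(2 * np.pi * 50 * t)    # a 50 Hz sinusoidal drive voltage

    currents = []
    for v_now in v:
        R = R_ON * w + R_OFF * (1 - w)                # resistance set by the current state
        i = v_now / R
        w = float(np.clip(w + k * i * dt, 0.0, 1.0))  # charge passed moves the boundary
        currents.append(i)

    # The same drive voltage (about 0.88 V at both 2 ms and 8 ms) gives different currents,
    # because charge has flowed in between -- the pinched hysteresis of a memristor.
    print(f"{currents[20] * 1e3:.3f} mA at 2 ms vs {currents[80] * 1e3:.3f} mA at 8 ms")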

This isn’t the only implementation of a memristor available, because as many researchers realized once the first results were announced, memory of a device’s past electrical history is a common nanoscale feature. Current flow can often cause small changes in materials, and while these changes may not be noticeable in the properties of a bulk material, when the material has very small thickness or feature size the changes can affect its properties in a measurable way. Since this constitutes a form of memory that lasts for a significant amount of time, and there is a large market for non-volatile electronic memory for computers, the commercial interest in these devices has been considerable. HP expects to have their version of memristor-based computer memory on the market by 2014, and it remains to be seen what other novel electronics may come from the memristor.

What are ‘digital’ electronics?

Why is the present moment called “the digital age”? What does it mean to have digital electronics, as opposed to analog? What’s so useful about digital?

In present-day electronics, the bulk of the calculations and computing are done on digital circuits, hence the "digital age" moniker. To get into what this means, we have to take a look back at the early development of calculating machines that used electronic signals. There are lots of components you can use in an electronic circuit, and with some basic resistors and capacitors you can start to build circuits that add one voltage to another, or perform other simple mathematical calculations. Early electronic calculators were made to mimic physical calculators and controllers that operated using gears or other mechanisms, such as the Antikythera mechanism, astrolabes, or even slide rules. Most physical calculators are analog, meaning that they apply an operation, like addition, to quantities that can take any value along a continuous spectrum. Adding a voltage representing 3 to a voltage representing 6 and reading out 9 is an analog operation. But it turns out that analog electronics have a reliability problem: any variation in the output, which could be due to changes in the circuit’s environment, degradation in the circuit components, or leakage of current out of the circuit, will be indistinguishable from the actual signal. So if I add 3 volts to 6 volts, but I lose half a volt, I’ll get 8.5 volts and have no way of knowing whether that’s the answer I was supposed to get or not. For some applications, this isn’t an issue if the person using the electronics is able to calibrate the circuit before operation, or if you have some method of double-checking your result. But if you want to build consumer electronics, where the user is not an expert, or electronics that can reliably operate without adjustment somewhere inaccessible, reliability is a huge issue.

But what if, instead of having a continuum of possible values, we use a small number of discrete values? This is called digital computation after the digits of the hand, which are frequently used to count the integers from 1 to 10. Digital computing deals in only two states: on and off, also known as true and false, or 0 and 1. It allows us to hide all the messiness of the analog world by assigning on and off to a wide range of voltage values: for example, we could have 0-2 V mean off, and 3-5 V mean on. Now we have a very large margin of error in our voltage ranges, so a little leakage or signal degradation will not affect what we read as the circuit output. The graph below shows how an oscillating analog signal would look as a digital signal.
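
Here’s a small sketch of my own (plain Python, nothing from the post) of why those wide voltage bands buy us reliability: half a volt of random noise doesn’t change how the bits are read.

    import random

    def read_bit(voltage):
        """Interpret a voltage as a digital value: 0-2 V is off, 3-5 V is on."""
        if voltage <= 2.0:
            return 0
        if voltage >= 3.0:
            return 1
        return None   # the forbidden zone circuits are designed to avoid

    random.seed(0)
    ideal = [0.0, 5.0, 5.0, 0.0, 5.0]                        # intended bits as voltages
    noisy = [v + random.uniform(-0.5, 0.5) for v in ideal]   # add half a volt of noise

    print([read_bit(v) for v in noisy])   # still reads [0, 1, 1, 0, 1]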

Physically, there are several ways to build a system that can switch between on and off. The first one that gained a foothold in technological use was the vacuum tube, which is somewhat similar to an incandescent light bulb. In a vacuum tube, a filament can be heated to emit electrons, which are collected at an electrode nearby. The electrons pass through a region of empty vacuum to get from the filament to the electrode, hence the name, and a control grid induces an electric field that can change how much current passes through the tube. Early computers had to have one of these vacuum tubes for each switch, hence the massive size of the first computers, which easily filled a room or several rooms. If you read science fiction from the vacuum tube era, computers play a big role, quite literally, since people assumed that any computer with the power to rival the human brain would have to be large enough to fill huge underground caverns.

The development of silicon as a material for electronics changed everything. Silicon can be turned on or off as a conductor of electrons simply by applying a voltage, and it can be manufactured so that the switch size is much smaller than the smallest vacuum tube. The scale of the smallest features you can make from silicon has been decreasing for decades, which means we are building computational chips with more and more switches in a given area.

But, one tricky thing about moving to digital logic is that the best math for these calculations is not much like our everyday math. Fortunately, a construction for doing digital calculations was developed in the mid-nineteenth century by George Boole. More on that next time!

Active and Passive Circuit Components

Now that we have several different circuit components under our belts, it’s helpful to try to classify the behavior that we’ve seen so far. Resistors, capacitors, and inductors respond in a reliable way to any applied voltage that induces an electric field. Resistors dissipate heat, capacitors store charge, and inductors store magnetic flux. These responses always occur and cannot be changed without altering the very structure of the material that produces them. They don’t add energy or electrons to a circuit, but merely redirect the electrons provided by an external source. Thus these are called "passive circuit components".
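
For reference, the fixed relationships that define these three passive components (standard textbook forms, summarized here by me rather than quoted from an earlier post) are:

    V = I R          (resistor: voltage proportional to current)
    Q = C V          (capacitor: stored charge proportional to voltage)
    V = L dI/dt      (inductor: voltage proportional to the change in current)

In each case a single fixed constant, set by the geometry and material of the component, ties the quantities together, which is another way of saying the response cannot be tuned while the circuit is running.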

Transistors also have a predictable response to a given voltage, but that response can be changed by tuning the gate voltage in order to open or close the conducting channel. Effectively, the transistor can be in one of two states:

  1. Functioning like a wire with a small resistance, passing most current through while dissipating a small amount of heat.
  2. Functioning like an insulator with a high resistance, blocking most current and dissipating more heat.

The controlling gate, which allows us to pick between these two states, can actually add energy to the system, increasing the current output; thus the transistor is called an "active circuit component". Circuits that do calculations or perform operations are usually a combination of active and passive circuit components, where the active components add energy and act as controls, whereas the passive components process the current in a predetermined way. There are other system analogues to this, such as hydrodynamic machines. Instead of controlling the flow of electrons, we can control the flow of water to provide energy, remove waste products, and even perform calculations. An active component would be a place where water is added or accelerated, whereas a passive component might be a wheel turned by the water or a gate that redirects the water. But in electronics, with electrons as the medium, active components add energy and passive components modify existing signals.

Electronics: The Bigger Picture

In our exploration of electronics, we started at the atomic level with the fundamental properties of subatomic particles. We looked at emergent properties of collections of atoms, like the origins of chemical bonding and electronic behavior of materials. Recently we have started to move up in scale, seeing that individual circuit components affect the flow and storage of electrons in different ways. At this point I think it is worthwhile to take a step back and look at the larger picture. While individual electrons are governed by local interactions that minimize energy, we can figure out global rules for a circuit component that tell us how collections of electrons are affected by a resistor or some other building block, creating the macroscopic quantity we call current. From there we can create collections of circuit components that perform various operations on the current passing through them. These operations can again be combined, and where we may have started with a simple switch, we can end up with a computer or a display or a control circuit.

One way to picture it is like a complex canal system for water: we have a resource whose movement we want to manipulate, to extract physical work and perhaps perform calculations. At a small scale, we can inject dye into a bit of water and watch its progress through the system as it responds to local forces. But we can look at water currents at a larger scale by adding up the behavior of many small amounts of water. In fact, scale is a type of context, a lens through which a system can look quite different! Electrical engineers who design complex circuits for a living tend to work at a much higher level of abstraction than do scientists working on experimental electronic devices. The electrical engineers have to be able to imagine and simulate the function of impressive numbers of transistors, resistors, and other components, as shown below, whereas a device physicist focuses on the detailed physics of a single circuit component, to learn what its best use might be. They are each working with the same system, but in different and complementary ways.

When I first started writing here, I talked about science as a lens through which we can view the world: a set of perspectives that let us see the things around us in a different way than we are used to. But there are lots of different worldviews and perspectives within science, depending on scale as well as other contexts. A discussion of electrical current, for example, could be handled quite differently depending on whether electrons are moving through a polar solvent like water, or synapses in the brain, or a metal wire connecting a capacitor to an inductor. Scientists who have trained in different fields like physics, chemistry, or biology can imagine very different contexts for discussions of the same phenomenon, so that even when the fundamental science is the same, the narrative and implications may change between contexts.

But in the end, whether you are a scientist or just interested in science, it helps to know not only that an electron is a tiny charged particle, but also how it behaves in electronic circuits, in chemical bonds between atoms, and in biological systems. And to know that it’s possible to build computers out of gears, billiard balls, or even crabs! But the size and properties of electronic computers have led them to dominate, at least for now.

The Many Roads from P-N Junctions to Transistors

When I called p-n junctions the building blocks of digital electronics, I was referring to their key role in building transistors. A transistor is another circuit element, but it is active, meaning it can add energy to a circuit, instead of passive like resistors, capacitors, or inductors, which only store or dissipate energy. The transistor has an input where current enters the device and an output where current leaves, but also has a control electrode which can be used to modify the transistor’s function. Transistors can act as switches or amplifiers, and can change the gain of a circuit (i.e. how many electrons come out compared to how many went in). So where did the transistor come from, and how do you build one?

The earliest devices which acted as transistors were called ‘triodes’, for their three electrodes, and were made using vacuum tubes. A current could be transmitted from one electrode to another, across the airless vacuum inside the tube. But applying a voltage to the third electrode induces an electric field which diverts the current, meaning that the third electrode can be used as a switch to turn the current on and off. Triodes were in wide use for the first half of the twentieth century, enabled many radio and telephone innovations, and in fact are still used in some specialty applications that require very high voltages. But they are quite fragile and consume a lot of power, which is part of what pushed researchers to find alternate ways to build a transistor.

Recall that the p-n junction acts as a diode, passing current in one direction but not the other. Two p-n junctions back to back, which could be n-p-n or p-n-p, will pass no current in either direction, because one of the junctions will always block the flow of charge. However, applying a voltage to the point where the p-n junctions are connected modifies the electric field, allowing current to pass. This kind of device is called a bipolar junction transistor (or BJT), because the p-n junction diodes respond differently to positive voltage than to negative voltage, which means they are sensitive to the polarity of the current. (Remember all those times in Star Trek that they tried reversing the polarity? Maybe they had some diodes in backward!) The input of a bipolar junction transistor is called the collector, the output is called the emitter, and the region where voltage is applied to switch the device on is called the base. These are drawn as C, E, and B in the schematic shown below.

Bipolar Junction Transistor

Looking at the geometry of a bipolar junction transistor, you might notice that without the base region, the device is just a block of doped semiconductor which would be able to conduct current. What if there were a way to insert or remove a differently doped region to create junctions as needed? This can be done with a slightly different geometry, as shown below with the input now marked S for source, the output marked D for drain, and the control electrode marked G for gate. Applying a voltage to the gate electrode widens the depletion region at the p-n interface, which pinches off current by reducing the cross-section of p-type semiconductor available for conduction. This is effectively a p-n-p junction where the interfaces can be moved by adjusting the depletion region. Since it’s the electric field due to the gate that makes the channel wider or narrower, this device is called a junction field-effect transistor, or JFET.

Junction Field Effect Transistor

Both types of junction transistor were in widespread use in electronics from about 1945-1975. But another kind of transistor has since leapt to prominence. Inverting the logic that led us to the junction field-effect transistor, we can imagine a device geometry where an electric field applied by a gate actually creates the conducting region in a semiconductor, as in the schematic below. This device is called a metal-oxide-semiconductor field-effect transistor (or MOSFET), because the metal gate electrode is separated from the semiconductor channel by a thin oxide layer. Using the oxide as an insulator is pretty clever, because interfaces between silicon and its native oxide have very few places for electrons to get stuck, compared to the interfaces between silicon and other insulating materials. This means that the whole device, with oxide, p-type silicon, and n-type silicon, can be made in a silicon fabrication facility, many of which had already been built in the first few decades of the electronics era.

These two advantages over junction transistors gave MOSFETs a definite edge, but one final development has cemented their dominance. The combination of an n-channel MOSFET and a p-channel MOSFET enables the creation of an extremely useful set of basic circuits. Devices built using pairs of one n-channel and one p-channel MOSFET working together are called CMOS, as shorthand for complementary metal-oxide-semiconductor, and have both lower power consumption and increased noise tolerance when compared to junction transistors. You might be asking, what are these super important circuits that CMOS is the best way of building? They are the circuits for digital logic, which we will devote a post to shortly!
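
As a tiny preview, here is a toy model of my own (plain Python, not a circuit from this post) of the simplest CMOS building block, the inverter: the n-channel and p-channel MOSFETs are treated as complementary switches, so exactly one of them conducts at a time.

    VDD = 5.0   # supply voltage

    def nmos_on(gate_v):
        return gate_v > VDD / 2    # the n-channel transistor conducts when its gate is high

    def pmos_on(gate_v):
        return gate_v < VDD / 2    # the p-channel transistor conducts when its gate is low

    def inverter(v_in):
        """One transistor pulls the output to the supply, the other to ground;
        because only one is on at a time, (almost) no current flows straight
        from supply to ground in either steady state -- hence the low power use."""
        if pmos_on(v_in):
            return VDD    # p-channel on: output pulled up to the supply
        if nmos_on(v_in):
            return 0.0    # n-channel on: output pulled down to ground
        return None

    print(inverter(0.0), inverter(5.0))   # prints: 5.0 0.0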