Tag Archives: electronics

Hotwiring the Brain

The most complex electrical device we possess isn’t in our pockets; it’s in our heads. Ever since Emil du Bois-Reymond’s discovery of the electrical pulses carried by nerves and Santiago Ramón y Cajal’s realisation that the brain is composed of separate cells, called neurons, we have known that the brain’s behaviour stems from its independent electrical parts. Many scientists are now studying electronic implants that can affect how our brains think and learn. New research on conducting polymers that work well in the body may bring us one step closer to the ability to manually overhaul our own brains.

Ramón y Cajal’s drawings of two types of neurons in 1899.

The health implications of plugging electronics into the brain, even with a very basic level of control, would be astounding. The connections in the brain can adapt in response to their environment, forming the basis of learning. This ‘plasticity’ means the brain could adapt to implanted electronics, for example by connecting to prosthetic limbs and learning to control them. Implantable electrodes that can excite or inhibit neural signals could also be used to treat disorders that stem from problematic patterns of neural activity, such as epilepsy and Parkinson’s disease.

Brain-computer interfaces have been studied intensively since the 1970s. Passive electrodes that record brain waves are already in widespread medical use. Invasive but accurate mapping of brain activity can be done by opening the skull, as neurosurgeons do during surgery to avoid tampering with important areas. Less invasive methods like electroencephalography (EEG) are helpful but more sensitive to noise and unable to distinguish different brain regions, let alone individual neurons. More active interfaces have been built for artificial retinas and cochleas, though the challenge of connecting to the brain reliably over long periods makes them very different from our natural eyes and ears. But what if we could change the way the brain works, with direct electronic stimulation?

However, current neural electrodes made from metal cause problems when left in the brain long term. The body treats anything implanted in the brain as a foreign object, and over time protective cells work to minimize its impact. This immune response not only damages the brain tissue around the electrode, it also gradually encapsulates the electrode, insulating it electrically from the brain and defeating the purpose of putting it there.

These issues arise because metal is hard and unyielding compared to tissue, and because the body mounts defenses against impurities in metal. Hypoallergenic metals are used to combat this issue in piercings and jewelry, but the brain is even more sensitive than skin to invasive metals. A newer approach being researched is the use of conducting polymers, either to coat metal electrodes or to form the electrodes themselves, removing metal from the picture altogether.

Conducting polymers are plastics, which are softer and more mechanically similar to living tissue than metal. Additionally, they conduct ions (as neurons in the brain do) and are excellent at transducing ionic signals into electronic ones, giving them high sensitivity to neural activity. Researchers at the École des Mines de Saint-Étienne in France have now demonstrated flexible, implantable electrodes that can directly sense or stimulate brain activity in live rats, without the immune reaction that plagues metal electrodes.

It’s a big step from putting organic electronics in the brain and reading out activity to uploading ourselves to the cloud. But while scientists work on improving resolution in space and time in order to fully map a brain, there is already new hope for those suffering from neurodegenerative diseases, thanks to the plasticity of the brain and the conductivity of plastic.

Happy Ada Lovelace Day!

Today is Ada Lovelace Day, a day of blogging about women in science! (Not necessarily blogging by women in science, which is every day here.) The day is named for Ada Lovelace, who was an important figure in the nascent days of computer science, back in the 1800s when it was more of a theoretical math field concerned with the creation of calculation engines. The idea of Ada Lovelace Day is to write about a woman in science, technology, engineering, or math, which raises awareness of all the great women, both now and in the past, who have done amazing things in the STEM fields. There will be lots of stories about various role models over at the official site once the day is concluded, but the scientist I wanted to tell you all about here is Mildred Dresselhaus.

For the last twenty years or so, materials made from carbon have been getting exponentially more attention. Carbon is an essential building block in many of the chemicals that are important for life, but there are also huge differences between carbon materials depending on how the carbon is bonded. Diamond and coal are both forms of carbon, but with wildly different crystal structures. Many of the hot carbon materials of recent years have come from new ways of arranging the carbon atoms. For example, carbon nanotubes are like rolled-up sheets of carbon, and graphene is a sheet of carbon that’s only one atom thick. Both carbon nanotubes and graphene have very high mechanical strength, electrical and thermal conductivity, and low permeability for their size. And there are a lot of other ways carbon can be nanostructured, collectively referred to as allotropes of carbon. You can see some of them in the image below, such as (a) diamond, (b) graphite (multiple sheets of graphene), and (h) a carbon nanotube.

But Dresselhaus was into carbon before it was cool, and has been a professor at MIT since the 60s studying the physics of carbon materials. Her work has focused on the thermal and electrical properties of nanomaterials, and the way in which energy dissipation is different in nanostructured carbon. Her early work involved difficult experimental studies of the electronic band structure of carbon materials and the effects of nanoscale confinement. She was also able to theoretically predict the existence of carbon nanotubes, some of their electronic properties, and the properties of graphene, years before either material had been prepared and measured. Her scientific achievements are extremely impressive, and she has received many honors accordingly.

And as you can imagine, things have changed a lot for women in science over the course of her career. When she began at MIT, less than 5% of students were female, and these days it’s more like 40%. But of course, it helps female students quite a bit to see female role models, like Dresselhaus. Which is the entire point of Ada Lovelace Day!

You can read an interview with Mildred Dresselhaus here, and more about her scientific achievements here.

Defects in the Crystal Structure of Materials

Describing phenomena from the real world mathematically has a tempting sort of elegance and simplicity, but there’s a danger of oversimplifying. You risk missing out on some of the really complex beauty that’s out there! I always experience a bit of glee when a simple explanation turns out to miss something messy but important, and for that reason I’d like to tell you about defects in crystal structure and why they matter.

A while back, we talked about periodic structure in solids, and how the geometric arrangement of atoms in a material determines many of the material properties. For electronics, the density of mobile charge carriers (electrons or holes) and the availability of energy states for those carriers to occupy are the two main relevant material properties. From these we can get metals, which conduct electrons easily, or insulators, which don’t, or semiconductors, which are somewhere in between. Modern computing is largely based on devices built from semiconductors, connected to each other using metals, with insulators used for electrical isolation and a few other tricks.

But while it’s helpful to imagine a perfect material, with each atom positioned just where it should be in an infinitely repeating array, nature is messy. We can learn a lot from theoretical material models, but the materials we find in the real world will be different. We call any deviation from perfect crystal lattice structure a defect, and the interesting thing about defects is that they can affect material properties too, especially with nanoscale objects that can have a high ratio of “defect” atoms to “normal” atoms.

There are a few different kinds of defects one finds in natural materials. For example, an atom might be missing from a lattice site, which is called a vacancy, or there might be an extra atom where there shouldn’t be one, which is called an interstitial. A whole plane of atoms might just start in the middle of the crystal, which is called an edge dislocation. Or two regions with different orientations of their crystal structure might be pressed up against each other, forming what is called a grain boundary. There are lots of ways that a crystal lattice can get messed up, but remember: for a given material, the native crystal lattice is usually the minimum-energy way of packing those atoms together. So the most probable arrangement for any given region is a perfect lattice, but the material will end up with some deviations from it as well. How many defects there are depends on how energetically favourable the lattice is, but also on how the material was made: if you solidify a crystal really slowly, the atoms are more likely to find the true lowest-energy positions (sites in the perfect lattice) and there are fewer defects.
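To put a rough number on that temperature dependence, here is a minimal Python sketch using the standard thermodynamic estimate for the equilibrium vacancy fraction, n_v/N ≈ exp(−E_v / k_B T). The 0.7 eV formation energy is just a placeholder of roughly the right order of magnitude for a typical metal, not a value for any particular material.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def vacancy_fraction(formation_energy_ev, temperature_k):
    """Equilibrium fraction of lattice sites that are vacant,
    from the estimate n_v/N = exp(-E_v / (k_B * T))."""
    return math.exp(-formation_energy_ev / (K_B * temperature_k))

E_V = 0.7  # assumed vacancy formation energy in eV (placeholder value)

for T in (300, 600, 900, 1200):  # temperatures in kelvin
    print(f"T = {T:4d} K: vacancy fraction ~ {vacancy_fraction(E_V, T):.2e}")
```

The fraction climbs by many orders of magnitude between room temperature and near the melting point, which is part of why cooling a crystal quickly can freeze in far more defects than slow solidification would.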

These defects all matter because they affect material properties, so if we assume that a material is going to behave as it would with a perfect crystal lattice, we are in for a surprise! If we’re focused on practicalities, like whether the material is as electrically conductive as we think it should be, then the type and concentration of defects in the device is very important, because some defects will reduce current flow and others will enhance it. To understand why, let’s think about the physical mechanisms for current flow through a material.

In a crystal, we have an array of positive nuclei, each with some tightly bound electrons; we can call these ‘localized’ electrons because they stay in the same approximate location around the nucleus. Electrons that were more loosely bound to the original atoms are able to move freely through the crystal; when these ‘delocalized’ electrons move through the material, that is what we measure as current flow. But a large part of the reason we can even have delocalized electrons is the periodicity of the lattice, which mathematically allows delocalized wave states for the electrons. When this periodicity is broken, as it is by dislocations and grain boundaries, the wave states can be disrupted, and new bound states that localize electrons at the defect are created. These are often called ‘charge traps’ because they trap the charge carriers in a specific place. However, if we ‘dope’ a material by carefully adding atoms that each carry an extra electron, we can add more electrons in total without significantly disrupting the wave states, which actually increases the conductivity of the material. So in device manufacture, controlled doping is a common way of using defects to tune material properties, but dislocations and grain boundaries are usually avoided because their effects are undesirable.
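As a rough illustration of why doping is so effective, here is a small Python sketch using the textbook relation σ = n q μ (conductivity equals carrier density times charge times mobility). The carrier densities and the mobility are ballpark room-temperature figures for silicon, chosen only to show the scale of the effect, and holes are ignored for simplicity.

```python
Q_E = 1.602e-19   # elementary charge in coulombs
MU_N = 1400.0     # rough electron mobility for silicon at room temperature, cm^2/(V*s)

def conductivity_s_per_cm(n_per_cm3, mobility_cm2=MU_N):
    """Conductivity in S/cm from sigma = n * q * mu (electrons only)."""
    return n_per_cm3 * Q_E * mobility_cm2

n_intrinsic = 1e10   # ballpark intrinsic carrier density of silicon, per cm^3
n_doped = 1e16       # a modest donor doping level, per cm^3

sigma_i = conductivity_s_per_cm(n_intrinsic)
sigma_d = conductivity_s_per_cm(n_doped)
print(f"intrinsic silicon: {sigma_i:.2e} S/cm")
print(f"doped silicon:     {sigma_d:.2e} S/cm")
print(f"improvement:       {sigma_d / sigma_i:.0e} times")
```

Even a fairly light doping level like this raises the conductivity by about a factor of a million, which is why doping is the workhorse defect of semiconductor manufacturing.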

In nanoscience defects are even more important, because with a nanoscale object that’s just hundreds of atoms across instead of hundreds of billions, the percentage of atoms sitting at non-ideal crystal sites can get pretty high. For example, many surface atoms end up as defects that trap charge, and a nanocrystal may be 30% surface atoms! So lots of nanoelectronics research is focused on minimizing the negative impact of defects. However, there’s an interesting contradiction there: the fascinating thing about nanomaterials is their quantum-like properties compared to bulk materials. But quantization in nanomaterials stems largely from the greatly increased number of grain boundaries, surface atoms, and defects, because all of these increase as you scale down the size of the material. So there’s a careful balance between mitigating the negative effects of defects and not getting rid of the nanoscale-specific behaviors that make nanoelectronics interesting in the first place!
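To see how quickly surface atoms take over, here is a minimal sketch that counts the atoms on the faces of an idealized simple-cubic block, n atoms on a side. Real nanocrystals have other lattices and shapes, so treat the numbers as order-of-magnitude estimates; the 0.25 nm atomic spacing used for the size labels is also just a typical assumed value.

```python
def surface_fraction(n_per_side):
    """Fraction of atoms on the surface of an n x n x n simple-cubic block."""
    total = n_per_side ** 3
    interior = max(n_per_side - 2, 0) ** 3
    return (total - interior) / total

ATOM_SPACING_NM = 0.25  # assumed typical atomic spacing, used only for the size labels

for n in (10, 18, 50, 200):
    size_nm = n * ATOM_SPACING_NM
    print(f"{n:3d} atoms per side (~{size_nm:.0f} nm): "
          f"{surface_fraction(n):.0%} surface atoms")
```

A block only about 18 atoms on a side, a few nanometres across, already matches the roughly 30% figure mentioned above, while a block hundreds of atoms across is almost entirely interior.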

Nanowires, Memristors, and the Brain

I haven’t gone into much detail about what I work on here, though from the topics I pick to write about, you could probably guess that it involves electronics, materials science, and interesting technological applications. But a science communication student at my workplace made a short video about my project, which features me and a couple of my coworkers explaining what’s cool about nanowires, memristors, and the brain, all in less than four minutes. Check it out!

A Quick Introduction to Photonics

Last time when we talked about CCDs, we were concerned with how to take an optical signal, like an image, and convert it to an electronic signal. Then it can be processed, moved, and stored using electronics. But there is an obvious question this idea raises: why is the conversion to electronic signal needed? Why can’t we process the optical signal directly? Is there a way to manipulate a stream of photons that’s analogous to the way that electronic circuits manipulate streams of electrons?

The answer is yes, and the field dealing with optical signal processing is called photonics. In the same way that we can generate electronic signals and manipulate them, signals made up of light can be generated, shuffled around, and detected. While the underlying physical mechanisms are different from those in electronics, much of the same processing can take place! There are a lot of cool topics in photonics, but let’s go over some of the most basic technology just to get a sense for how it all works.

The most common way to generate optical signals in photonics is by using a laser diode, which is actually another application of the p-n junction. Applying a voltage across the junction causes electrons to drift into the junction from one side, while holes (which are oppositely charged) drift in from the other side. This “charge injection” results in a net current flow, but it also means that some electrons and holes will meet in the junction. When this happens, they can recombine if the electron falls into the empty electron state that the hole represents. There is generally an energy difference between the free electron state and the free hole state, and that energy can be emitted as a photon. This is how light with a specific energy is generated in a semiconductor laser diode, and when the junction is placed inside a reflective cavity that amplifies the light, you get a very reliable light source that is easy to modulate in order to encode a signal.
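The colour of the emitted light follows directly from that energy difference, since a photon of energy E has wavelength λ = hc/E. Here is a quick sketch of the conversion; the band-gap values are approximate textbook figures for a few common laser-diode materials, not precise device specifications.

```python
H_PLANCK = 6.626e-34   # Planck constant, J*s
C_LIGHT = 2.998e8      # speed of light, m/s
EV_TO_J = 1.602e-19    # joules per electronvolt

def emission_wavelength_nm(bandgap_ev):
    """Wavelength (in nm) of a photon carrying the full band-gap energy."""
    return H_PLANCK * C_LIGHT / (bandgap_ev * EV_TO_J) * 1e9

# approximate band gaps in eV (rough literature values)
for material, gap_ev in [("GaAs", 1.42), ("InP", 1.35), ("GaN", 3.4)]:
    print(f"{material}: ~{emission_wavelength_nm(gap_ev):.0f} nm")
```

A gallium arsenide diode, for example, comes out in the near infrared around 870 nm, which is why different semiconductors are chosen for different colours and for the wavelengths that travel best in optical fiber.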

But how do you send that signal anywhere else? Whereas electronic signals pass easily through metal wires, photonic signals are commercially transmitted through transparent optical fibers (hence the term “fiber optic”). Optical fibers take advantage of total internal reflection, a really cool phenomenon in which, beyond a certain angle of incidence at an interface, all of the incident light is reflected back. Since light is a quantized electromagnetic wave, how it moves through its surroundings depends on how easy it is to make the surrounding medium oscillate. Total internal reflection is a direct consequence of Snell’s Law, which describes how light bends when it passes between media that differ in how easily light travels through them (the technical term for this is refractive index). So optical fibers consist of a core with a high refractive index clad in a sheath with a lower refractive index, tuned so that the core will exhibit total internal reflection for a specific wavelength of light. You can see an example of total internal reflection below, for light travelling through a plastic surrounded by air. Because they exhibit total internal reflection, optical fibers can transmit photonic signals over long distances, with less loss than an electronic signal moving through a long wire would experience, as well as less susceptibility to stray electromagnetic fields.
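Snell’s Law also gives the threshold directly: light travelling from refractive index n1 into a lower index n2 is totally reflected once its angle of incidence exceeds the critical angle θ_c = arcsin(n2/n1). Here is a minimal sketch using typical (approximate) index values for a silica fiber core and cladding, plus an acrylic-like plastic in air for the example pictured above.

```python
import math

def critical_angle_deg(n_core, n_cladding):
    """Critical angle (degrees from the normal) for total internal reflection,
    for light travelling from n_core into n_cladding (requires n_core > n_cladding)."""
    return math.degrees(math.asin(n_cladding / n_core))

# approximate refractive indices, for illustration only
print(f"silica core / cladding: {critical_angle_deg(1.468, 1.447):.1f} degrees")
print(f"plastic / air:          {critical_angle_deg(1.49, 1.00):.1f} degrees")
```

Because the core and cladding indices are so close, light only stays trapped in the fiber when it travels nearly parallel to the fiber axis, which is exactly the geometry a long thin fiber provides.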

Photonic signals can then be turned back into electronic signals using semiconducting photodetectors, which take advantage of the photoelectric effect. This technology is the basis of most modern wired telecommunications, including the Internet!

But if you are remembering all the electronic components, like resistors and capacitors and transistors, which we use to manipulate electronic signals, you may be wondering what the corresponding parts are for photonics. There are photonic crystals, which have microstructure that affects the passage of light, of which opal is a naturally occurring example! And photonic signals can be recorded and later read out on optical media like CDs and DVDs. But in general, the commercial possibilities of optical data transmission have outweighed those of complex photonic signal analysis. That’s why our network infrastructure is photonic but our computers, for now, are electronic. However, there are lots of researchers working in this area, so that could change, and that also means that if you find photonics interesting there is much more to read!

The Electronic Eye: Charge-Coupled Devices

Now that we’ve looked at some of the basic interactions between electrons and light, we can turn our focus to electronic photodetectors: devices that can sense light and respond by producing a current. A simple example is the semiconductor photomultiplier, which we talked about last time, and which can very sensitively measure the intensity of the light that hits it.

But what do we do if we want to record an image: a two-dimensional map of the intensity of incident light? In traditional photography, silver halide crystals on photographic film interact with incident photons, and the chemical development process then causes the altered crystals to darken. Since semiconductors generate a number of electrons proportional to the number of incident photons, you might think it would be easy to develop a similar digital process. But the major issue for digital imaging was not so much sensitivity as signal processing: if you have a thin film which is electrically responding to light, how do you read out the resultant electronic signal without losing spatial resolution?

Because of these difficulties, early electronic imaging was done using vacuum tubes, a bulky but effective technology we’ve discussed several times before. Many researchers were looking for a practical means of imaging with semiconductors, but the major breakthrough came in 1969, when Boyle and Smith had the basic idea for structuring a semiconductor imaging device in what’s now called the charge-coupled device (CCD).

To retain spatial resolution in an image, the photoresponsive semiconductor in a CCD is divided into an array of capacitors, each of which can store some amount of charge. One way to picture it is as an array of wells, where a rainstorm can dump a finite amount of water into any wells under it, and that water remains separate from the water in neighboring wells. But in a CCD, as photons enter the semiconductor and generate electrons, electrodes at different voltages create potential wells to trap the electrons. These wells define what we call pixels: the smallest possible subdivisions of the image.

However, to be able to make an image we also need to measure how much charge has accumulated in each well. In our device, this means moving the electrons to an amplifier. But how can we transfer those wells of electrons without letting them mix with each other (which would blur the image) or leak out of the device altogether? To accomplish this, we need the wells confining the electrons to be mobile! But remember that the wells themselves are defined by applying voltages to a patterned array of electrodes. This means moving a well is possible: we lower the potential directly in front of the well and raise it directly behind. This idea is illustrated below for a system with three coupled potential arrays, a silicon oxide insulating layer, and an active region of p-doped silicon.

You can imagine that, instead of our previous array of wells to map rainfall, we have an array of buckets and a brigade of volunteers to pass the buckets to a measurement point. The principle is sometimes called a bucket brigade, and the general method of moving electronic outputs forward is termed a shift register. The physical implementation in CCDs, using voltages which are cycling high and low, is called clocking.

In general, the charge in a CCD will be transferred down the columns of the detector and then read out row by row by an electronic amplifier, which converts charge to voltage. Since this is a serial process, if the efficiency of transferring charge from one pixel to the next is 99%, then after moving through 100 pixels only 36% of the electrons will be left! So for a 10 megapixel camera, where the charge may pass through as many as 6,000 pixels before being measured, the charge transfer efficiency has to be more like 99.9999%! Historically, this was first achieved by cooling the CCDs using liquid nitrogen to reduce thermal noise, a practical approach for detectors on spacecraft but one that initially limited commercial applications. But eventually CCDs were made with decent transfer efficiency at room temperature, and this has been the main technological factor behind the development of digital photography. CCDs themselves don’t distinguish between different colors of photons, but color filters can be placed over different pixels to create a red channel, a green channel, and a blue channel that are recombined to make a color image. CCDs are the image sensors in all of our digital cameras and most of our phone cameras, and it was partly for this enormous technological impact that Boyle and Smith received the Nobel Prize in 2009.
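The transfer-efficiency arithmetic above is just repeated multiplication: after N pixel-to-pixel transfers, a fraction CTE^N of the original charge packet survives. A quick sketch that reproduces the numbers quoted above:

```python
def charge_remaining(cte, n_transfers):
    """Fraction of a charge packet left after n_transfers pixel-to-pixel moves,
    each with charge transfer efficiency cte."""
    return cte ** n_transfers

print(f"99% CTE after 100 transfers:        {charge_remaining(0.99, 100):.1%}")
print(f"99.9999% CTE after 6000 transfers:  {charge_remaining(0.999999, 6000):.1%}")
```

That exponential sensitivity to the per-pixel efficiency is why CCD manufacturing tolerances are so demanding compared to a single isolated photodetector.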

There are a lot more cool details about the function of CCDs over at the wikipedia page, and many researchers are still finding ways to improve CCDs!

What are ‘digital’ electronics?

Why is the present moment called “the digital age”? What does it mean to have digital electronics, as opposed to analog? What’s so useful about digital?

In present-day electronics, the bulk of the calculations and computing are done on digital circuits, hence the “digital age” moniker. To get into what this means, we have to take a look back at the early development of calculating machines that used electronic signals. There are lots of components you can use in an electronic circuit, and with some basic resistors and capacitors you can start to build circuits that add one voltage to another, or perform other simple mathematical calculations. Early electronic calculators were made to mimic physical calculators and controllers that operated using gears or other mechanisms, such as the Antikythera mechanism, astrolabes, or even slide rules. Most physical calculators are analog, meaning that they apply an operation, like addition, to a set of numbers that can be anywhere along a continuous spectrum. Adding 3 to 6 and getting 9 is an analog operation.

But it turns out that analog electronics have a reliability problem: any variation in the output, which could be due to changes in the circuit’s environment, degradation in the circuit components, or leaking of current out of the circuit, will be indistinguishable from the actual signal. So if I add 3 volts to 6 volts, but I lose half a volt, I’ll get 8.5 volts and have no way of knowing whether that’s the answer I was supposed to get or not. For some applications, this isn’t an issue if the person using the electronics is able to calibrate the circuit before operation, or if you have some method of double-checking your result. But if you want to build consumer electronics, where the user is not an expert, or electronics that can reliably operate without adjustment somewhere inaccessible, reliability is a huge issue.

But what if, instead of having a continuum of possible values, we use only a small number of discrete values? This is called digital computation after the digits of the hand, which are frequently used to count the integers from 1 to 10. Digital computing deals in only two states: on and off, also known as true and false, or 0 and 1. It allows us to hide all the messiness of the analog world by assigning on and off to a wide range of voltage values: for example, we could have 0-2 V mean off, and 3-5 V mean on. Now we have a very large margin of error in our voltage ranges, so a little leakage or signal degradation will not affect what we read as the circuit output. The graph below shows how an oscillating analog signal would look as a digital signal.
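Here is a tiny sketch of that idea: analog voltages get mapped onto just two logical values, and anything in the gap between the ranges is treated as invalid rather than silently misread. The 0-2 V and 3-5 V ranges are the ones from the example above.

```python
def to_logic_level(voltage):
    """Map an analog voltage to a digital level using the example thresholds:
    0-2 V reads as 'off' (0), 3-5 V reads as 'on' (1), anything else is invalid."""
    if 0.0 <= voltage <= 2.0:
        return 0
    if 3.0 <= voltage <= 5.0:
        return 1
    return None  # outside the defined ranges: not a valid logic level

# a noisy or degraded 'on' signal still reads correctly as 1
for v in (4.9, 4.4, 3.2, 1.7, 0.3):
    print(f"{v:.1f} V -> {to_logic_level(v)}")
```

Losing half a volt off a 5 V signal still lands comfortably inside the ‘on’ range, which is exactly the robustness the analog adder above was missing.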

Physically, there are several ways to build a system that can switch between on and off. The first one to gain a foothold in technological use was the vacuum tube, which is somewhat similar to an incandescent light bulb. In a vacuum tube, a filament can be heated to emit electrons, which are collected at an electrode nearby. The electrons pass through a region of empty vacuum to get from the filament to the electrode, hence the name, and a control grid induces an electric field that can change how much current passes through the tube. Early computers needed one of these vacuum tubes for each switch, hence the massive size of the first machines, which easily filled a room or several rooms. If you read science fiction from the vacuum tube era, computers play a big role, quite literally, since people assumed that any computer with the power to rival the human brain would have to be large enough to fill huge underground caverns.

The development of silicon as a material for electronics changed everything. Silicon can be turned on or off as a conductor of electrons simply by applying a voltage, and it can be manufactured so that the switch size is much smaller than the smallest vacuum tube. The scale of the smallest features you can make from silicon has been decreasing for decades, which means we are building computational chips with more and more switches in a given area.

But, one tricky thing about moving to digital logic is that the best math for these calculations is not much like our everyday math. Fortunately, a construction for doing digital calculations was developed in the mid-nineteenth century by George Boole. More on that next time!

Active and Passive Circuit Components

Now that we have several different circuit components under our belts, it’s helpful to try to classify the behavior that we’ve seen so far. Resistors, capacitors, and inductors respond in a reliable way to any applied voltage that induces an electric field. Resistors dissipate heat, capacitors store charge, and inductors store magnetic flux.  These responses always occur and cannot be manipulated without manipulating the very structure of the material which causes the response. They don’t add energy or electrons to a circuit, but merely redirect the electrons provided by an external source. Thus these are called “passive circuit components”.

Transistors also have a predictable response to a given voltage, but that response can be changed by tuning the gate voltage in order to open or close the conducting channel. Effectively, the transistor can be in one of two states:

  1. Functioning like a wire with a small resistance, passing most current through while dissipating a small amount of heat.
  2. Functioning like an insulator with a high resistance, blocking most current and dissipating more heat.

The controlling gate, which allows us to pick between these two states, can actually add energy to the system, increasing the current output; thus the transistor is called an “active circuit component”. Circuits that do calculations or perform operations are usually a combination of active and passive components, where the active components add energy and act as controls, while the passive components process the current in a predetermined way. There are analogues to this in other systems, such as hydrodynamic machines. Instead of controlling the flow of electrons, we can control the flow of water to provide energy, remove waste products, and even perform calculations. An active component would be a place where water is added or accelerated, whereas a passive component might be a wheel turned by the water or a gate that redirects it. But in electronics, with electrons as the medium, active components add energy and passive components modify existing signals.

Electronics: The Bigger Picture

In our exploration of electronics, we started at the atomic level with the fundamental properties of subatomic particles. We looked at emergent properties of collections of atoms, like the origins of chemical bonding and electronic behavior of materials. Recently we have started to move up in scale, seeing that individual circuit components affect the flow and storage of electrons in different ways. At this point I think it is worthwhile to take a step back and look at the larger picture. While individual electrons are governed by local interactions that minimize energy, we can figure out global rules for a circuit component that tell us how collections of electrons are affected by a resistor or some other building block, creating the macroscopic quantity we call current. From there we can create collections of circuit components that perform various operations on the current passing through them. These operations can again be combined, and where we may have started with a simple switch, we can end up with a computer or a display or a control circuit.

One way to picture it is like a complex canal system for water: we have a resource whose movement we want to manipulate, to extract physical work and perhaps perform calculations. At a small scale, we can inject dye into a bit of water and watch its progress through the system as it responds to local forces. But we can look at water currents at a larger scale by adding up the behavior of many small amounts of water. In fact, scale is a type of context, a lens through which a system can look quite different! Electrical engineers who design complex circuits for a living tend to work at a much higher level of abstraction than do scientists working on experimental electronic devices. The electrical engineers have to be able to imagine and simulate the function of impressive numbers of transistors, resistors, and other components, as shown below, whereas a device physicist focuses on the detailed physics of a single circuit component, to learn what its best use might be. They are each working with the same system, but in different and complementary ways.

When I first started writing here, I talked about science as a lens through which we can view the world: a set of perspectives that let us see the things around us in a different way than we are used to. But there are lots of different worldviews and perspectives within science, depending on scale as well as other contexts. A discussion of electrical current, for example, could be handled quite differently depending on whether charge is moving through a polar solvent like water, across synapses in the brain, or along a metal wire connecting a capacitor to an inductor. Scientists trained in different fields like physics, chemistry, or biology can imagine very different contexts for discussions of the same phenomenon, so that even when the fundamental science is the same, the narrative and implications may change between contexts.

But in the end, whether you are a scientist or just interested in science, it helps to know not only that an electron is a tiny charged particle, but also how it behaves in electronic circuits, in chemical bonds between atoms, and in biological systems. And to know that it’s possible to build computers out of gears, billiard balls, or even crabs! But the size and properties of electronic computers have led them to dominate, at least for now.

The Many Roads from P-N Junctions to Transistors

When I called p-n junctions the building blocks of digital electronics, I was referring to their key role in building transistors. A transistor is another circuit element, but it is active, meaning it can add energy to a circuit, instead of passive like resistors, capacitors, and inductors, which can only store energy or dissipate it. The transistor has an input where current enters the device and an output where current leaves, but it also has a control electrode which can be used to modify the transistor’s function. A transistor can act as a switch or as an amplifier, changing the gain of a circuit (i.e. how much current comes out compared to how much went in). So where did the transistor come from, and how do you build one?

The earliest devices which acted as transistors were called ‘triodes’, for their three electrodes, and were made using vacuum tubes. A current could be transmitted from one electrode to another across the airless vacuum inside the tube, but applying a voltage to the third electrode induced an electric field that diverted the current, meaning the third electrode could be used as a switch to turn the current on and off. Triodes were in wide use for the first half of the twentieth century and enabled many radio and telephone innovations, and in fact they are still used in some specialty applications that require very high voltages. But they are quite fragile and consume a lot of power, which is part of what pushed researchers to find alternative ways to build a transistor.

Recall that the p-n junction acts as a diode, passing current in one direction but not the other. Two p-n junctions back to back, which could be n-p-n or p-n-p, will pass no current in either direction, because one of the junctions will always block the flow of charge. However, applying a voltage to the point where the p-n junctions are connected modifies the electric field, allowing current to pass. This kind of device is called a bipolar junction transistor (or BJT), because the p-n junction diodes respond differently to positive voltage than to negative voltage, which means they are sensitive to the polarity of the current. (Remember all those times in Star Trek that they tried reversing the polarity? Maybe they had some diodes in backward!) The input of a bipolar junction transistor is called the collector, the output is called the emitter, and the region where voltage is applied to switch the device on is called the base. These are drawn as C, E, and B in the schematic shown below.

Bipolar Junction Transistor
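To make that concrete, here is a very rough behavioural sketch of an NPN bipolar junction transistor, treated the way it often is to a first approximation: no collector current until the base-emitter junction is forward biased (around 0.7 V for silicon), and above that a collector current of roughly β times the base current, capped by what the rest of the circuit can supply. The 0.7 V threshold, β = 100, and the current cap are generic textbook-style values, not the parameters of any real part.

```python
def npn_collector_current(v_be, i_base, beta=100.0, i_c_max=0.1):
    """Crude NPN model: off below ~0.7 V base-emitter bias, otherwise the
    collector current is beta * i_base, capped at i_c_max (saturation)."""
    if v_be < 0.7:
        return 0.0
    return min(beta * i_base, i_c_max)

# a tiny base current controls a much larger collector current
print(npn_collector_current(v_be=0.2, i_base=100e-6))  # off: 0.0 A
print(npn_collector_current(v_be=0.8, i_base=100e-6))  # on: 0.01 A, 100x the base current
```

This is the amplifying behaviour mentioned earlier: a small control signal at the base steers a much larger current between collector and emitter.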

Looking at the geometry of a bipolar junction transistor, you might notice that without the base region, the device is just a block of doped semiconductor which would be able to conduct current. What if there were a way to insert or remove a differently doped region to create junctions as needed? This can be done with a slightly different geometry, as shown below with the input now marked S for source, the output marked D for drain, and the control electrode marked G for gate. Applying a voltage to the gate electrode widens the depletion region at the p-n interface, which pinches off current by reducing the cross-section of p-type semiconductor available for conduction. This is effectively a p-n-p junction where the interfaces can be moved by adjusting the depletion region. Since it’s the electric field due to the gate that makes the channel wider or narrower, this device is called a junction field-effect transistor, or JFET.

Junction Field Effect Transistor
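The pinching action of the gate can also be captured in a first-order formula: in its saturation region, a JFET’s drain current is commonly approximated as I_D = I_DSS (1 − V_GS/V_P)^2, where I_DSS is the current with no gate voltage and V_P is the pinch-off voltage at which the channel closes completely. Here is a minimal sketch with made-up but plausible parameters, written for an n-channel device where the gate is driven negative; the p-channel geometry described above behaves the same way with the signs flipped.

```python
def jfet_drain_current(v_gs, i_dss=0.010, v_pinchoff=-4.0):
    """First-order JFET saturation current: I_D = I_DSS * (1 - V_GS/V_P)^2,
    valid for gate voltages between the pinch-off voltage and zero."""
    if v_gs <= v_pinchoff:
        return 0.0  # channel fully pinched off
    return i_dss * (1.0 - v_gs / v_pinchoff) ** 2

# making the gate more negative narrows the channel and chokes off the current
for v_gs in (0.0, -1.0, -2.0, -3.0, -4.0):
    print(f"V_GS = {v_gs:+.1f} V: I_D = {jfet_drain_current(v_gs) * 1000:.2f} mA")
```

The smooth squeeze from full current down to zero is what lets the same device work either as an amplifier (in the middle of that range) or as a switch (at the two ends).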

Both types of junction transistor were in widespread use in electronics from about 1945 to 1975. But another kind of transistor has since leapt to prominence. Inverting the logic that led us to the junction field-effect transistor, we can imagine a device geometry where an electric field applied by a gate actually creates the conducting region in a semiconductor, as in the schematic below. This device is called a metal-oxide-semiconductor field-effect transistor (or MOSFET), because the metal gate electrode is separated from the semiconductor channel by a thin oxide layer. Using the oxide as an insulator is pretty clever, because interfaces between silicon and its native oxide have very few places for electrons to get stuck, compared to the interfaces between silicon and other insulating materials. It also means that the whole device, with oxide, p-type silicon, and n-type silicon, can be made in a silicon fabrication facility, many of which had already been built in the first few decades of the electronics era.

These two advantages over junction transistors, the clean oxide interface and the compatibility with existing silicon fabrication, gave MOSFETs a definite edge, but one final development cemented their dominance. The combination of an n-channel MOSFET and a p-channel MOSFET enables the creation of an extremely useful set of basic circuits. Devices built from pairs of one n-channel and one p-channel MOSFET working together are called CMOS, shorthand for complementary metal-oxide-semiconductor, and have both lower power consumption and better noise tolerance than junction transistors. You might be asking, what are these super important circuits that CMOS is the best way of building? They are the circuits for digital logic, which we will devote a post to shortly!
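As a tiny preview of those logic circuits, here is an idealized sketch of the simplest CMOS building block, the inverter: the p-channel MOSFET connects the output to the supply when the input is low, and the n-channel MOSFET connects it to ground when the input is high, so exactly one of the pair conducts at a time. This is a purely logical model, not a circuit simulation.

```python
def cmos_inverter(input_high):
    """Idealized CMOS inverter: the PMOS pulls the output up when the input is low,
    the NMOS pulls it down when the input is high; exactly one transistor conducts."""
    pmos_on = not input_high   # p-channel conducts for a low gate voltage
    nmos_on = input_high       # n-channel conducts for a high gate voltage
    assert pmos_on != nmos_on  # complementary: never both on in this ideal model
    return pmos_on             # output is high only if the pull-up is conducting

for logic_in in (False, True):
    print(f"in = {int(logic_in)} -> out = {int(cmos_inverter(logic_in))}")
```

Because one transistor of the pair is always off, an ideal CMOS gate draws essentially no current except while it is switching, which is a big part of the low power consumption mentioned above.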