Category Archives: Technology

Hotwiring the Brain

The most complex electrical device we possess isn’t in our pockets; it’s in our heads. Ever since Emil du Bois-Reymond’s discovery of electrical pulses in the nervous system and Santiago Ramón y Cajal’s realisation that the brain is composed of separate cells, called neurons, we have known that the brain’s behaviour stems from its independent electrical parts. Many scientists are studying electronic implants that can affect how our brains think and learn, and new research on conducting polymers that work well in the body may bring us one step closer to the ability to manually overhaul our own brains.

Ramón y Cajal’s drawings of two types of neurons in 1899.


The immediate brain health implications of plugging electronics into the brain, even with a very basic level of control, would be astounding. The connections in the brain can adapt in response to their environment, forming the basis of learning. This ‘plasticity’ means the brain could adapt in response to implanted electronics, for example by connecting to prosthetic limbs and learning to control them. Implantable electrodes which can excite or inhibit neural signals could also be used to treat disorders stemming from pathological neural patterns, such as epilepsy and Parkinson’s disease.

Brain-computer interfaces have been studied intensively since the 1970s. Passive electrodes which can record brain waves are already in widespread medical use. Invasive but accurate mapping of brain activity can be done by opening the skull, as neurosurgeons do during surgery to avoid tampering with important areas. Less invasive methods like electroencephalography (EEG) are helpful but more sensitive to noise and unable to distinguish different brain regions, let alone individual neurons. More active interfaces have been built for artificial retinas and cochleas, though the challenge of connecting to the brain consistently and over long periods makes them very different from our natural eyes and ears. But what if we could directly change the way the brain works, with direct electronic stimulation?

However, current neural electrodes made from metal cause problems when left in the brain long term. The body treats implants in the brain as foreign objects, and over time protective cells work to minimize their impact. This immune response not only damages the brain tissue around the electrode, it actually works to encapsulate the electrode, insulating it electrically from the brain and defeating its purpose entirely.

These issues arise because metal is hard and unyielding compared to tissue, and because the body mounts defenses against impurities in metal. Hypoallergenic metals are used to combat this issue in piercings and jewelry, but the brain is even more sensitive than the skin to invasive metals. A new approach being researched is the use of conducting polymers, either to coat metal electrodes or to replace them entirely, removing metal from the picture altogether.

Conducting polymers are plastics, which are softer and more mechanically similar to living tissue than metal. Additionally, they conduct ions (as neurons in the brain do) and are excellent at transducing ionic signals into electronic ones, giving high sensitivity to neural activity. Researchers at the École des Mines de Saint-Étienne in France have now demonstrated flexible, implantable electrodes which can be used to directly sense or stimulate brain activity in live rats, without the immune reaction plaguing metal electrodes.

It’s a big step from putting organic electronics in the brain and reading out activity to uploading ourselves to the cloud. But while scientists work on improving resolution in space and time in order to fully map a brain, there is already new hope for those suffering from neurodegenerative diseases, thanks to the plasticity of the brain and the conductivity of plastic.

Thermoelectrics and the Movement of Electrons

Often the big picture, like a river winding down a canyon or flashes of lightning in the sky, can be understood by looking at the small-scale behavior of each component, like molecules of water rushing to lower elevation or electrons seeking to equalize potential. I’ve always liked that approach to understanding nanoscale physics phenomena, like electricity or heat. But if you think about it, since electricity is the movement of charge carriers in response to an electric field, and hotter particles also move around more, shouldn’t there be some interaction between the two? Can something develop an electric current as a result of being hot? Or can driving electricity result in the generation of heat?

If you’ve ever felt an electronic device heat up in your hand or on your lap, then you know the answer is yes! Running an electric current through some materials, like resistors, generates waste heat, which is a big practical problem for electronics manufacturers. But there are some materials that can do the opposite, converting heat to electricity. Extracting usable electricity from waste heat is especially impressive when you think about the reduction in entropy involved; turning high-entropy disordered heat back into an ordered electrical potential is a strong local reduction of entropy. Heat-to-electricity conversion in the broader sense, with heat driving turbines that spin generators, is how roughly 90% of the world’s electricity is produced, though thermoelectric conversion itself operates at low efficiency. Materials where a temperature difference creates an electrical potential are called thermoelectrics, and in addition to being really practically important, thermoelectrics are a great illustration of how important it is to understand what’s happening at the nanoscale.

The most common thermoelectric device is one where two different metals are pressed together, creating a junction. Each metal is a conductor, and will have its own electrons, which can freely move across the junction. But the electrons experience different forces in each conductor: they may find it easier or more difficult to move through the material, based on the physical properties of the metal itself. So applying a voltage across the junction will affect the electrons in each material differently, and can cause one metal’s electrons to move faster than the other’s. This difference in electron speeds, or a difference in how easily the electrons transfer their energy to the atomic nuclei in the metal, or just slow diffusion of electrons across the junction, can all lead to a temperature difference between the two metals. Thus heat can be produced at the junction, and it can even be removed given the right material properties. Heat generation or removal at an electrical junction is called the Peltier effect, and is the basis of some nanoscale refrigerators and heat pumps.
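To put rough numbers on the Peltier effect: the heat pumped at a junction per second is the Peltier coefficient times the current, and the Peltier coefficient is the Seebeck coefficient difference between the two materials times the absolute temperature. A minimal sketch, using ballpark Seebeck coefficients for a bismuth-telluride couple (illustrative values, not measured data):

```python
# Illustrative Seebeck coefficients for a p-type / n-type
# bismuth-telluride couple, in volts per kelvin (ballpark values).
S_P = 200e-6
S_N = -200e-6

def peltier_heat(current_amps, temperature_kelvin):
    """Heat pumped across the junction per second, in watts.

    Uses Q = Pi * I, where the Peltier coefficient
    Pi = (S_p - S_n) * T follows from the Seebeck coefficients.
    """
    pi_coefficient = (S_P - S_N) * temperature_kelvin
    return pi_coefficient * current_amps

# A 2 A current through a couple at room temperature (300 K)
# pumps about a quarter of a watt across the junction.
print(peltier_heat(2.0, 300.0))
```

With these numbers a single couple is a very modest heat pump, which is why practical Peltier coolers stack many couples electrically in series and thermally in parallel.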

Conversely, if a temperature difference already exists across a junction of two conductors, you can imagine the faster-moving electrons in the hotter material interacting with the slower-moving electrons in the colder material at the junction. For the right combination of material properties, an electrical potential will be induced by the differing temperatures, which is called the Seebeck effect. But it’s the same mechanism as the Peltier effect above, namely that both heat and electric fields induce the movement of charge carriers, and so of course the two effects have some interaction with each other.

It’s not just metals that exhibit thermoelectric behavior, though. Semiconductors can also be used as thermoelectrics, and actually have a broader range of thermoelectric behavior because their carrier concentration varies more widely than that of metals. Heat and electric field affect the charge carriers in every material, it’s just that some materials have properties that result in a more interesting and usable phenomenon.

Thermoelectric materials can be used as heat pumps and refrigerators, as I mentioned above. But the thermoelectric effect can also be used to measure temperature, by putting two metals that react differently to temperature together and then measuring the induced electric potential. This is how thermocouples work, which are incredibly common. And it all comes from the fact that both heat and electricity cause motion at the nanoscale.
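The thermocouple idea can be sketched in a few lines: over modest temperature ranges the induced voltage is roughly linear in the temperature difference, around 41 µV per degree Celsius for a common type-K couple. (Real instruments use polynomial calibration tables; the single linear constant here is an approximation for illustration.)

```python
# Approximate type-K thermocouple sensitivity, in volts per degree C.
# Real thermocouples are calibrated with polynomial tables; a single
# linear constant is only reasonable over modest temperature ranges.
TYPE_K_SENSITIVITY = 41e-6

def junction_temperature(measured_volts, reference_temp_c=25.0):
    """Infer the hot-junction temperature from the Seebeck voltage,
    measured relative to a reference junction at a known temperature."""
    return reference_temp_c + measured_volts / TYPE_K_SENSITIVITY

# A 2.05 mV reading against a 25 C reference implies a 75 C junction.
print(junction_temperature(2.05e-3))
```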

Nanowires, Memristors, and the Brain

I haven’t gone into much detail about what I work on here, though from the topics I pick to write about, you could probably guess that it involves electronics, materials science, and interesting technological applications. But a science communication student at my workplace made a short video about my project, which features me and a couple of my coworkers explaining what’s cool about nanowires, memristors, and the brain, all in less than four minutes. Check it out!

Facebook is evil; or, Meme-sharing can be hazardous to your brain

Facebook is evil.

Now, hear me out. Some of the things Facebook does are great – allowing communication and connection across great physical distances foremost among them. However, Facebook also allows memes – “an idea, behavior or style that spreads from person to person within a culture” – to spread faster than ever before. The meme I am bemoaning today is the sharing of information without any review or fact-checking, allowing false and potentially harmful information to propagate across vast groups of people like wildfire.

The internet makes it very easy for sources to be lost – ask any artist or photographer who has come across their work being shared without credit. Without embedded attribution or an easy way to trace where something came from, a picture or statistic or quote can be repurposed for almost anything, and if we don’t question its veracity then all we’re doing is blindly swallowing whatever the nameless, faceless denizens of the internet want us to believe. While it sounds rather dramatic to state it like that, I do think the inclination to accept purported ‘news stories’ and other such information without questioning the source behind them is unscientific and actively harmful to our critical thinking skills.

Sometimes all it takes is a few seconds to search for ‘horse shelter hoax’ to find the truth – a lot of the time people have already done the work for you! Sometimes it takes a bit more digging to uncover that what seems like a brilliant idea might not be, but either way, wouldn’t you rather know the truth, rather than what a snappy headline or wittily-captioned picture tells you?

Think, people. Ask questions. Do your research. You don’t have to have a PhD to check sources, and you don’t need to call yourself a scientist to seek out the truth.

Don’t believe everything Facebook (and its millions of hangers-on) tells you. The internet has no obligation to tell you the truth, but anybody with a dedication to learning (and a snappy poster) can confirm that the truth is out there, if you look hard enough.

A Quick Introduction to Photonics

Last time, when we talked about CCDs, we were concerned with how to take an optical signal, like an image, and convert it to an electronic signal. Then it can be processed, moved, and stored using electronics. But there is an obvious question this idea raises: why is the conversion to an electronic signal needed? Why can’t we process the optical signal directly? Is there a way to manipulate a stream of photons that’s analogous to the way that electronic circuits manipulate streams of electrons?

The answer is yes, and the field dealing with optical signal processing is called photonics. In the same way that we can generate electronic signals and manipulate them, signals made up of light can be generated, shuffled around, and detected. While the underlying physical mechanisms are different from those in electronics, much of the same processing can take place! There are a lot of cool topics in photonics, but let’s go over some of the most basic technology just to get a sense for how it all works.

The most common way to generate optical signals in photonics is by using a laser diode, which is actually another application of the p-n junction. Applying a voltage across the junction causes electrons to drift into the junction from one side, while holes (which are oppositely charged) drift in from the other side. This “charge injection” results in a net current flow, but it also means that some electrons and holes will meet in the junction. When this happens, they can recombine if the electron falls into the empty electron state that the hole represents. But there is generally an energy difference between the free electron and free hole states, and this energy can then be emitted as a photon. This is how light with a specific energy is generated in the semiconductor laser diode, and when the junction is placed in an optical cavity that amplifies that light, you get a very reliable light source that is easy to modulate in order to encode a signal.
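The emitted wavelength follows directly from that energy difference via λ = hc/E. A quick sanity check, using the textbook band gap of GaAs (about 1.42 eV) as an illustrative value:

```python
# Physical constants
H_PLANCK = 6.626e-34    # Planck constant, J*s
C_LIGHT = 2.998e8       # speed of light, m/s
EV_TO_JOULES = 1.602e-19

def emission_wavelength_nm(band_gap_ev):
    """Wavelength (nm) of a photon carrying the band-gap energy,
    from lambda = h * c / E."""
    energy_joules = band_gap_ev * EV_TO_JOULES
    return H_PLANCK * C_LIGHT / energy_joules * 1e9

# GaAs, with a band gap near 1.42 eV, emits in the near infrared,
# around 870-875 nm.
print(emission_wavelength_nm(1.42))
```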

But how do you send that signal anywhere else? Whereas electronic signals pass easily through metal wires, photonic signals are commercially transmitted through transparent optical fibers (hence the term “fiber optic”). Optical fibers take advantage of total internal reflection, a really cool phenomenon where, beyond a certain angle of incidence at an interface, all incident light is reflected off the interface. Since light is a quantized electromagnetic wave, how it moves through its surroundings depends on how easy it is to make the surrounding medium oscillate. Total internal reflection is a direct consequence of Snell’s Law, which describes how light changes direction when it passes between media that slow it down by different amounts (the technical term for this property is refractive index). So optical fibers consist of a fiber with high refractive index which is clad in a sheath with lower refractive index, tuned so that the inner fiber will exhibit total internal reflection for a specific wavelength of light. You can see an example of total internal reflection below, for light travelling through a plastic surrounded by air. When optical fibers exhibit total internal reflection, they can transmit photonic signals over long distances, with less loss than an electronic signal moving through a long wire would experience, as well as less susceptibility to stray electromagnetic fields.
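Snell’s Law gives the critical angle directly: total internal reflection sets in when sin(θ) = n_cladding / n_core. A quick calculation, with refractive indices typical of a silica telecom fiber (approximate values; the exact indices depend on the doping and the wavelength):

```python
import math

# Approximate refractive indices for a standard silica telecom fiber
# (actual values depend on doping and wavelength).
N_CORE = 1.468
N_CLADDING = 1.447

# Total internal reflection occurs for incidence angles (measured from
# the normal to the core/cladding interface) beyond the critical angle,
# where Snell's Law gives sin(theta_c) = n_cladding / n_core.
critical_angle = math.degrees(math.asin(N_CLADDING / N_CORE))
print(critical_angle)
```

The result is a bit over 80 degrees: because the two indices are so close, only light that grazes along the fiber wall is trapped, which is why coupling light into a fiber requires careful alignment.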

Photonic signals can then be turned back into electronic signals using semiconducting photodetectors, which take advantage of the photoelectric effect. This technology is the basis of most modern wired telecommunications, including the Internet!

But if you are remembering all the electronic components, like resistors and capacitors and transistors, which we use to manipulate electronic signals, you may be wondering what the corresponding parts are for photonics. There are photonic crystals, which have microstructure that affects the passage of light, of which opal is a naturally occurring example! And photonic signals can be recorded and later read out on optical media like CDs and DVDs. But in general, the commercial possibilities of optical data transmission have outweighed those of complex photonic signal analysis. That’s why our network infrastructure is photonic but our computers, for now, are electronic. However, there are lots of researchers working in this area, so that could change, and that also means that if you find photonics interesting there is much more to read!

The Electronic Eye: Charge-Coupled Devices

Now that we’ve looked at some of the basic interactions between electrons and light, we can turn our focus to electronic photodetectors: devices that can sense light and respond by producing a current. A simple example is the semiconductor photomultiplier, which we talked about last time, and which can very sensitively measure the intensity of light that reaches it.

But what do we do if we want to record an image: a two-dimensional map of the intensity of incident light? In traditional photography, silver halide crystals on photographic film interact with incident photons, then the chemical development process causes the altered crystals to darken. Since semiconductors generate a number of electrons proportional to the number of incident photons, you might think it would be easy to develop a similar digital process. But the major issue for digital imaging was not so much sensitivity as signal processing: if you have a thin film which is electrically responding to light, how do you read out the resultant electronic signal without losing spatial resolution?

Because of these difficulties, early electronic imaging was done using vacuum tubes, a bulky but effective technology we’ve discussed several times before. Many researchers were looking for a practical means of imaging with semiconductors, but the major breakthrough came in 1969, when Boyle and Smith had the basic idea for structuring a semiconductor imaging device in what’s now called the charge-coupled device (CCD).

To retain spatial resolution in an image, the photoresponsive semiconductor in a CCD is divided into an array of capacitors, each of which can store some amount of charge. One way to picture it is as an array of wells, where a rainstorm can dump a finite amount of water into any wells under it, and that water remains separate from the water in neighboring wells. But in a CCD, as photons enter the semiconductor and generate electrons, electrodes at different voltages create potential wells to trap the electrons. These wells define what we call pixels: the smallest possible subdivisions of the image.

However, to be able to make an image we also need to be able to measure how much charge has accumulated in each well. In our device, this means moving the electrons to an amplifier. But how can we transfer those wells of electrons without letting them mix with each other (which would blur the image) or leak out of the device altogether? To accomplish this, we need the wells confining the electrons to be mobile! But remember that the wells themselves are defined by applying voltages to a patterned array of electrodes. This means moving the well is possible, by lowering the potential directly in front of a well and raising the potential directly behind it. This idea is illustrated below for a system with three coupled potential arrays, a silicon oxide insulating layer, and an active region of p-doped silicon.

You can imagine that, instead of our previous array of wells to map rainfall, we have an array of buckets and a brigade of volunteers to pass the buckets to a measurement point. The principle is sometimes called a bucket brigade, and the general method of moving electronic outputs forward is termed a shift register. The physical implementation in CCDs, using voltages which are cycling high and low, is called clocking.
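The bucket-brigade readout is easy to model: each clock cycle, the packet at the output well is measured and every remaining packet shifts one well toward the output, keeping only a fraction of its charge per transfer. A toy simulation (the transfer-efficiency parameter is illustrative, not a real device spec):

```python
def read_out_row(pixels, transfer_efficiency=0.999999):
    """Serially read a 1-D row of charge packets, bucket-brigade style.

    Each clock cycle the packet at the output end is measured, and
    every remaining packet shifts one well toward the output, keeping
    only `transfer_efficiency` of its charge per transfer.
    """
    row = list(pixels)
    readings = []
    while row:
        readings.append(row[0])  # amplifier measures the output well
        # remaining packets shift forward, each losing a little charge
        row = [charge * transfer_efficiency for charge in row[1:]]
    return readings

# With a poor 99% transfer efficiency, packets farther from the
# amplifier arrive visibly depleted.
print(read_out_row([100, 100, 100, 100], transfer_efficiency=0.99))
```

The pixel n wells from the amplifier survives n transfers, so its signal is scaled by the transfer efficiency raised to the nth power, which is exactly why the efficiency requirement is so severe for large sensors.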

In general, the charge in a CCD will be transferred down the columns of the detector and then read out row by row by an electronic amplifier, which converts charge to voltage. Since this is a serial process, if the efficiency of transferring charge from one pixel to the next is 99%, then after moving through 100 pixels only 36% of the electrons will be left! So for a 10 megapixel camera, where the charge may pass through as many as 6,000 pixels before being measured, the charge transfer efficiency has to be more like 99.9999%! Historically, this was first achieved by cooling the CCDs using liquid nitrogen to reduce thermal noise, a practical approach for detectors on spacecraft but one that initially limited commercial applications. But eventually CCDs were made with decent transfer efficiency at room temperature, and this has been the main technological factor behind the development of digital photography. CCDs themselves don’t distinguish between different colors of photons, but color filters can be placed over different pixels to create a red channel, a green channel, and a blue channel that are recombined to make a color image. CCDs are the image sensors in all of our digital cameras and most of our phone cameras, and it was partly for this enormous technological impact that Boyle and Smith received the Nobel Prize in 2009.
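The arithmetic behind those numbers is just repeated multiplication: after n transfers at per-transfer efficiency e, a fraction e^n of the charge survives. Checking the figures quoted above:

```python
def surviving_fraction(efficiency, transfers):
    """Fraction of charge left after repeated transfers, each of which
    keeps `efficiency` of the charge."""
    return efficiency ** transfers

# 99% efficiency over 100 transfers: only about 37% of the charge
# survives, far too lossy for imaging.
print(surviving_fraction(0.99, 100))

# 99.9999% efficiency over 6000 transfers: over 99% survives.
print(surviving_fraction(0.999999, 6000))
```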

There are a lot more cool details about the function of CCDs over at the Wikipedia page, and many researchers are still finding ways to improve CCDs!

Plasma Displays and Plasma Physics

You may have noticed one big technology missing from my recent post on how displays work: I didn’t talk about plasma displays! I wanted to have more space to discuss what plasma actually is before getting into the detail of how the displays work, and that’s what today’s post is about.

Plasmas are usually considered a state of matter. But whereas order and density distinguish the other states of matter from each other—solids are dense and ordered, liquids are dense and disordered, and gases are disperse and disordered—for plasma another factor is important. Plasmas are disperse and disordered like gases, but they are also ionized. Whereas a non-ionized gas consists of atoms, in an ionized gas the negatively charged electrons have been stripped from the positively charged atomic nuclei and both are moving freely through the gas. The positively charged nuclei left behind are called ions, and together with the free electrons they fill the gas with mobile charge. Remembering the attractive force that oppositely charged particles experience, it might seem like a plasma would be pretty short-lived! Electrons and nuclei form stable atoms together because that is a low-energy configuration, which means it’s very appealing for the plasma to recombine into regular atoms. And in fact that’s what happens if you let it cool down, but if you keep the plasma temperature high, the ions are more likely to stay separated. Roughly speaking, the fraction of atoms that are ionized depends on the plasma temperature. Hotter plasmas often have nearly all of their atoms broken apart and ionized, whereas cooler plasmas may be only partly ionized. But the more ions you have, the more electromagnetic interactions occur within the plasma because of all the free charge, and this is what makes plasmas behave differently from non-ionized gases.
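That temperature dependence can be made quantitative with the Saha equation, a standard plasma-physics result relating the ionization fraction to temperature and density (a textbook formula, not something derived in this post). A minimal sketch for a pure hydrogen gas in thermal equilibrium:

```python
import math

# Physical constants in SI units
ELECTRON_MASS = 9.109e-31        # kg
BOLTZMANN = 1.381e-23            # J/K
PLANCK = 6.626e-34               # J*s
H_IONIZATION = 13.6 * 1.602e-19  # hydrogen ionization energy, J

def ionization_fraction(temperature_k, total_density_m3):
    """Fraction of hydrogen atoms ionized, via the Saha equation.

    Assumes pure hydrogen in thermal equilibrium, with charge
    neutrality (equal electron and ion densities).
    """
    saha = (2 * math.pi * ELECTRON_MASS * BOLTZMANN * temperature_k
            / PLANCK**2) ** 1.5 \
        * math.exp(-H_IONIZATION / (BOLTZMANN * temperature_k))
    # With x = n_ion / n_total, charge neutrality turns the Saha
    # relation into the quadratic x**2 * n + saha * x - saha = 0.
    n = total_density_m3
    return (-saha + math.sqrt(saha**2 + 4 * n * saha)) / (2 * n)

# Ionization rises steeply with temperature: nearly neutral at 5000 K,
# but mostly ionized by 15,000 K (at a density of 1e20 atoms per m^3).
for t in (5000, 10000, 15000):
    print(t, ionization_fraction(t, 1e20))
```

The steep exponential in that formula is why a plasma recombines so readily once you stop heating it.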

A hot gas of ions may sound somewhat removed from the quotidian phases of solid, liquid, and gas. But actually, plasma is the most common phase of luminous matter in the universe, prevalent both in stars and in the interstellar medium. (I say luminous matter here to distinguish from dark matter, which seems to make up more total mass than the matter we can see, and whose phase and nature are both unknown.) There are also lots of examples of plasmas here on Earth, such as lightning bolts, the northern lights, and the neon that lights up a neon sign. You may have noticed that these are all light-emitting phenomena; the high energy of the ions means that they have many lower energy states available to jump to, and those energy level changes often involve emitting a photon to produce visible light.

So how can plasma be controlled to make a display? Illumination comes from tiny pockets of gas that can be excited into a plasma by applying an electrical current, and each pixel is defined by a separate plasma cell. For monochrome displays, the gas can be something like neon which emits light that we can see. But to create multiple colors of light, various phosphors are painted in front of the plasma cells. The phosphors absorb the light emitted by the plasma and emit their own light in a variety of colors (this is also how color CRT displays work). Plasma displays tend to have better contrast than LCDs and less dependence on viewing angle, but they also consume quite a bit of energy as you might expect from thinking about keeping the ions in the plasma separated.

There are a lot of other cool things about plasmas, like how they can be contained by electromagnetic fields and how they are used in modern industrial processing to etch semiconductors and grow nanomaterials. But for further reading I definitely recommend the Wikipedia article on plasmas.