Hotwiring the Brain

The most complex electrical device we possess isn’t in our pockets, it’s in our heads. Ever since Emil du Bois-Reymond’s discovery of electrical pulses in the brain and Santiago Ramón y Cajal’s realisation that the brain was composed of separate pieces, called neurons, we have known that the brain’s behaviour stems from its independent electrical parts. Many scientists are studying electronic implants that can affect how our brains think and learn. New research on conducting polymers that work well in the body may bring us one step closer to the ability to manually overhaul our own brains.

Ramón y Cajal’s drawings of two types of neurons in 1899.

The immediate brain health implications of plugging electronics into the brain, even with a very basic level of control, would be astounding. The connections in the brain adapt in response to their environment, forming the basis of learning. This ‘plasticity’ means the brain could adapt to implanted electronics, for example by connecting to prosthetic limbs and learning to control them. Implantable electrodes which can excite or inhibit neural signals could also be used to treat disorders stemming from faulty neural firing patterns, such as epilepsy and Parkinson’s disease.

Brain-computer interfaces have been studied intensively since the 1970s. Passive electrodes which record brain waves are already in widespread medical use. Invasive but accurate mapping of brain activity can be done by opening the skull, as neurosurgeons do during surgery to avoid tampering with important areas. Less invasive methods like electroencephalography (EEG) are helpful but more sensitive to noise, and they cannot distinguish between different brain regions, let alone individual neurons. More active interfaces have been built for artificial retinas and cochleas, though the challenge of connecting to the brain reliably over long periods makes them very different from our natural eyes and ears. But what if we could change the way the brain works, with direct electronic stimulation?

However, current neural electrodes made from metal cause problems when left in the brain long term. The body treats an implant in the brain as a foreign object, and over time protective cells work to minimize its impact. This immune response not only damages the brain tissue around the electrode, it actually encapsulates the electrode, insulating it electrically from the brain and defeating the purpose of putting it there.

These issues arise because metal is hard and unyielding compared to tissue, and because the body mounts defenses against impurities in metal. Hypoallergenic metals are used to combat this issue in piercings and jewelry, but the brain is even more sensitive than the skin to invasive metals. A newer approach being researched is the use of conducting polymers, either to coat metal electrodes or to replace the metal altogether.

Conducting polymers are plastics, which are softer and mechanically more similar to living tissue than metal. Additionally, they conduct ions (as neurons in the brain do) and are excellent at transducing ionic signals into electronic ones, giving high sensitivity to neural activity. Researchers at the École des Mines de Saint-Étienne in France have now demonstrated flexible, implantable electrodes which can directly sense or stimulate brain activity in live rats, without the immune reaction plaguing metal electrodes.

It’s a big step from putting organic electronics in the brain and reading out activity to uploading ourselves to the cloud. But while scientists work on improving resolution in space and time in order to fully map a brain, there is already new hope for those suffering from neurodegenerative diseases, thanks to the plasticity of the brain and the conductivity of plastic.

Thermoelectrics and the Movement of Electrons

Often the big picture, like a river winding down a canyon or flashes of lightning in the sky, can be understood by looking at the small-scale behavior of each component, like molecules of water rushing to lower elevation or electrons seeking to equalize potential. I’ve always liked that approach to understanding nanoscale physics phenomena, like electricity or heat. But if you think about it, since electricity is the movement of charge carriers in response to an electric field, and hotter particles also move around more, shouldn’t there be some interaction between the two? Can something develop an electric current as a result of being hot? Or can driving electricity result in the generation of heat?

If you’ve ever felt an electronic device heat up in your hand or on your lap, then you know the answer is yes! Running an electric current through some materials, like resistors, generates waste heat, which is a big practical problem for electronics manufacturers. But there are some materials that can do the opposite, converting heat to electricity. Extracting usable electricity from waste heat is especially impressive when you think about the entropy involved: turning high-entropy, disordered heat back into an ordered electrical potential is a strong local reduction of entropy, paid for by heat flowing from a hot region to a cold one. Converting heat into electricity is, in fact, how roughly 90% of the world’s electricity is generated, mostly via steam turbines rather than the solid-state effects discussed here, and always at limited efficiency. Materials where a temperature difference creates an electrical potential are called thermoelectrics, and in addition to being really practically important, thermoelectrics are a great illustration of how important it is to understand what’s happening at the nanoscale.
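To get a feel for the numbers, here is a minimal sketch of the Carnot limit, the thermodynamic ceiling on any heat-to-electricity conversion (the temperatures below are made-up round values for a waste-heat scenario):

```python
# Carnot efficiency: the upper bound on the fraction of heat flowing
# from a hot reservoir to a cold one that can be converted into work.
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum convertible fraction; temperatures in kelvin."""
    return 1.0 - t_cold_k / t_hot_k

# Example: electronics waste heat at ~350 K dumped into a ~300 K room.
print(f"{carnot_efficiency(350.0, 300.0):.1%}")  # ~14.3% at the very best
```

Real thermoelectric generators recover only a fraction of even this modest limit, which is why they tend to be used where reliability matters more than efficiency.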

The most common thermoelectric device is one where two different metals are pressed together, creating a junction. Each metal is a conductor, and will have its own electrons, which can freely move across the junction. But the electrons experience different forces in each conductor: they may find it easier or more difficult to move through the material, based on the physical properties of the metal itself. So applying a voltage across the junction will affect the electrons in each material differently, and can cause one metal’s electrons to move faster than the other’s. This difference in electron speeds, or a difference in how easily the electrons transfer their energy to the atomic nuclei in the metal, or just slow diffusion of electrons across the junction, can all lead to a temperature difference between the two metals. Thus heat can be produced at the junction, and it can even be removed given the right material properties. Heat generation or removal at an electrical junction is called the Peltier effect, and is the basis of some nanoscale refrigerators and heat pumps.

Conversely, if a temperature difference already exists across a junction of two conductors, you can imagine the faster-moving electrons in the hotter material interacting with the slower-moving electrons in the colder material at the junction. For the right combination of material properties, the differing temperatures will induce an electrical potential, which is called the Seebeck effect. But it’s the same underlying mechanism as the Peltier effect above: both heat and electric fields induce the movement of charge carriers, and so of course the two effects have some interaction with each other.
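In compact form (these are the standard textbook relations, not specific to any particular device), the two effects read

$$\dot{Q} = (\Pi_A - \Pi_B)\,I \quad \text{(Peltier)}, \qquad V = (S_A - S_B)\,\Delta T \quad \text{(Seebeck)},$$

where $\Pi$ is the Peltier coefficient and $S$ the Seebeck coefficient of each conductor. The Kelvin relation $\Pi = S\,T$ ties the two coefficients together, which is the mathematical statement that the two effects share one mechanism.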

It’s not just metals that exhibit thermoelectric behavior, though. Semiconductors can also be used as thermoelectrics, and they actually have a broader range of thermoelectric behavior because their carrier concentration varies more widely than that of metals. Heat and electric fields affect the charge carriers in every material; it’s just that some materials have properties that result in a more interesting and usable phenomenon.

Thermoelectric materials can be used as heat pumps and refrigerators, as I mentioned above. But the thermoelectric effect can also be used to measure temperature, by joining two metals that react differently to temperature and then measuring the induced electric potential. This is how thermocouples, those incredibly common temperature sensors, work. And it all comes from the fact that both heat and electricity cause motion at the nanoscale.
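As a toy illustration, here is a sketch of a thermocouple readout, assuming a constant sensitivity of about 41 µV/°C (in the ballpark of a type-K thermocouple) and pretending the response is perfectly linear; real instruments use calibration polynomials instead:

```python
# Toy thermocouple readout: infer a temperature difference from the
# measured Seebeck voltage, assuming one fixed (linearized) sensitivity.
SEEBECK_UV_PER_C = 41.0  # assumed sensitivity, microvolts per deg C

def delta_t_from_voltage(v_microvolts: float) -> float:
    """Temperature difference (deg C) implied by a measured voltage."""
    return v_microvolts / SEEBECK_UV_PER_C

# A 2.05 mV reading implies the junction is ~50 C above the reference end.
print(f"{delta_t_from_voltage(2050.0):.0f} C")
```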

Nanowires, Memristors, and the Brain

I haven’t gone into much detail about what I work on here, though from the topics I pick to write about, you could probably guess that it involves electronics, materials science, and interesting technological applications. But a science communication student at my workplace made a short video about my project, which features me and a couple of my coworkers explaining what’s cool about nanowires, memristors, and the brain, all in less than four minutes. Check it out!

Facebook is evil; or, Meme-sharing can be hazardous to your brain

Facebook is evil.

Now, hear me out. Some of the things Facebook does are great – allowing communication and connection across great physical distances foremost among them. However, Facebook also allows memes – “an idea, behavior or style that spreads from person to person within a culture” – to spread faster than ever before. The meme I am bemoaning today is the sharing of information without any review or fact-checking, which allows false and potentially harmful information to propagate across vast groups of people like wildfire.

The internet makes it very easy for sources to be lost – ask any artist or photographer who has come across their work being shared without credit. Without embedded attribution or an easy way to trace where something came from, a picture or statistic or quote can be repurposed to support almost anything, and if we don’t question its veracity then all we’re doing is blindly swallowing whatever the nameless, faceless denizens of the internet want us to believe. While it sounds rather dramatic to state it like that, I do think the inclination to accept purported ‘news stories’ and other such information without questioning the source behind them is unscientific and actively harmful to our critical thinking skills.

Sometimes all it takes is a few seconds to search for ‘horse shelter hoax’ on snopes.com to find the truth – a lot of the time people have already done the work for you! Sometimes it takes a bit more digging to uncover that what seems like a brilliant idea might not be one. Either way, wouldn’t you rather know the truth than believe whatever a snappy headline or wittily-captioned picture tells you?

Think, people. Ask questions. Do your research. You don’t have to have a PhD to check sources, and you don’t need to call yourself a scientist to seek out the truth.

Don’t believe everything Facebook (and its millions of hangers-on) tells you. The internet has no obligation to tell you the truth, but anybody with a dedication to learning (and a snappy poster) can confirm that the truth is out there, if you look hard enough.

A Quick Introduction to Photonics

Last time when we talked about CCDs, we were concerned with how to take an optical signal, like an image, and convert it to an electronic signal. Then it can be processed, moved, and stored using electronics. But there is an obvious question this idea raises: why is the conversion to electronic signal needed? Why can’t we process the optical signal directly? Is there a way to manipulate a stream of photons that’s analogous to the way that electronic circuits manipulate streams of electrons?

The answer is yes, and the field dealing with optical signal processing is called photonics. In the same way that we can generate electronic signals and manipulate them, signals made up of light can be generated, shuffled around, and detected. While the underlying physical mechanisms are different from those in electronics, much of the same processing can take place! There are a lot of cool topics in photonics, but let’s go over some of the most basic technology just to get a sense for how it all works.

The most common way to generate optical signals in photonics is by using a laser diode, which is actually another application of the p-n junction. Applying a voltage across the junction causes electrons to drift into it from one side, while holes (which are oppositely charged) drift in from the other side. This “charge injection” results in a net current flow, but it also means that some electrons and holes will meet in the junction. When this happens, they can recombine: the electron falls into the empty electron state that the hole represents. There is generally an energy difference between the free electron and free hole states, and this energy can be emitted as a photon. This is how light with a specific energy is generated in the semiconductor laser diode, and when the junction is placed inside an optical cavity that amplifies that light, you get a very reliable light source that is easy to modulate in order to encode a signal.
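A back-of-the-envelope sketch of that “specific energy”: the emitted photon carries roughly the bandgap energy of the semiconductor, so the wavelength follows directly (the GaAs bandgap below is a standard textbook figure):

```python
# Estimate a laser diode's emission wavelength from its bandgap:
# the photon energy is roughly E_gap, so lambda = h * c / E_gap.
H_C_EV_NM = 1239.84  # Planck's constant times the speed of light, in eV*nm

def emission_wavelength_nm(bandgap_ev: float) -> float:
    return H_C_EV_NM / bandgap_ev

# GaAs, a common laser-diode material, has a bandgap of about 1.42 eV.
print(f"{emission_wavelength_nm(1.42):.0f} nm")  # ~873 nm, near-infrared
```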

But how do you send that signal anywhere else? Whereas electronic signals pass easily through metal wires, photonic signals are commercially transmitted through transparent optical fibers (hence the term “fiber optic”). Optical fibers take advantage of total internal reflection, a really cool phenomenon where, for certain angles at an interface, all incident light is reflected off the interface. Since light is a quantized electromagnetic wave, how it moves through its surroundings depends on how easily it can make the surrounding medium oscillate. Total internal reflection is a direct consequence of Snell’s Law, which describes how light changes direction when it passes between media that slow it down by different amounts (the property quantifying this is the refractive index). So optical fibers consist of a core with high refractive index clad in a sheath with lower refractive index, tuned so that the core will exhibit total internal reflection for a specific wavelength of light. You can see an example of total internal reflection below, for light travelling through a plastic surrounded by air. When optical fibers exhibit total internal reflection, they can transmit photonic signals over long distances, with less loss than an electronic signal moving through a long wire would experience, as well as less susceptibility to stray electromagnetic fields.
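Snell’s Law also gives the critical angle beyond which total internal reflection occurs: sin(θc) = n_cladding / n_core. A quick sketch with illustrative indices (assumed values, in the range of real silica fibers):

```python
import math

# Critical angle for total internal reflection at a core/cladding interface.
# Light hitting the boundary at an angle (measured from the surface normal)
# larger than this is entirely reflected back into the core.
def critical_angle_deg(n_core: float, n_clad: float) -> float:
    return math.degrees(math.asin(n_clad / n_core))

# Illustrative refractive indices for a fiber core and its cladding.
print(f"{critical_angle_deg(1.475, 1.460):.1f} degrees")  # ~81.8 degrees
```

Because the two indices are so close, only light travelling nearly parallel to the fiber axis is trapped, which is exactly the light you want to guide.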

Photonic signals can then be turned back into electronic signals using semiconducting photodetectors, which take advantage of the photoelectric effect. This technology is the basis of most modern wired telecommunications, including the Internet!

But if you are remembering all the electronic components, like resistors and capacitors and transistors, which we use to manipulate electronic signals, you may be wondering what the corresponding parts are for photonics. There are photonic crystals, which have microstructure that affects the passage of light, of which opal is a naturally occurring example! And photonic signals can be recorded and later read out on optical media like CDs and DVDs. But in general, the commercial possibilities of optical data transmission have outweighed those of complex photonic signal analysis. That’s why our network infrastructure is photonic but our computers, for now, are electronic. However, there are lots of researchers working in this area, so that could change, and that also means that if you find photonics interesting there is much more to read!

The Electronic Eye: Charge-Coupled Devices

Now that we’ve looked at some of the basic interactions between electrons and light, we can turn our focus to electronic photodetectors: devices that can sense light and respond by producing a current. A simple example is semiconductor photomultipliers, like we talked about last time, which are able to very sensitively measure the intensity of light that impacts the photomultiplier.

But what do we do if we want to record an image: a two-dimensional map of the intensity of incident light? In traditional photography, silver halide crystals on photographic film interact with incident photons, then the chemical development process causes the altered crystals to darken. Since semiconductors generate a number of electrons proportional to the number of incident photons, you might think it would be easy to develop a similar digital process. But the major issue for digital imaging was not so much sensitivity as signal processing: if you have a thin film which is electrically responding to light, how do you read out the resultant electronic signal without losing spatial resolution?

Because of these difficulties, early electronic imaging was done using vacuum tubes, a bulky but effective technology we’ve discussed several times before. Many researchers were looking for a practical means of imaging with semiconductors, but the major breakthrough came in 1969, when Boyle and Smith had the basic idea for structuring a semiconductor imaging device in what’s now called the charge-coupled device (CCD).

To retain spatial resolution in an image, the photoresponsive semiconductor in a CCD is divided into an array of capacitors, each of which can store some amount of charge. One way to picture it is as an array of wells, where a rainstorm can dump a finite amount of water into any wells under it, and that water remains separate from the water in neighboring wells. But in a CCD, as photons enter the semiconductor and generate electrons, electrodes at different voltages create potential wells to trap the electrons. These wells define what we call pixels: the smallest possible subdivisions of the image.

However, to be able to make an image we also need to measure how much charge has accumulated in each well. In our device, this means moving the electrons to an amplifier. But how can we transfer those wells of electrons without letting them mix with each other (which would blur the image) or leak out of the device altogether? To accomplish this, we need the wells confining the electrons to be mobile! But remember that the wells themselves are defined by applying voltages to a patterned array of electrodes. This means moving the well is possible, by lowering the potential directly in front of a well and raising the potential directly behind it. This idea is illustrated below for a system with three coupled potential arrays, a silicon oxide insulating layer, and an active region of p-doped silicon.

You can imagine that, instead of our previous array of wells to map rainfall, we have an array of buckets and a brigade of volunteers to pass the buckets to a measurement point. The principle is sometimes called a bucket brigade, and the general method of moving electronic outputs forward is termed a shift register. The physical implementation in CCDs, using voltages which are cycling high and low, is called clocking.
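Here’s a cartoon of that readout process in code — not real device physics, just the bookkeeping (the loss factor per transfer is an assumed constant):

```python
# Toy model of CCD clocking: every charge packet shifts one well toward
# the output per clock cycle, losing a tiny fraction at each transfer.
def read_out(wells, transfer_efficiency=0.999999):
    """Shift all packets to the output end; return them in readout order."""
    wells = list(wells)
    measured = []
    while wells:
        # The packet next to the amplifier is measured first...
        measured.append(wells.pop(0))
        # ...and every remaining packet loses a little moving up one well.
        wells = [q * transfer_efficiency for q in wells]
    return measured

# Packets farther from the amplifier undergo more transfers, so lose more.
print(read_out([100.0, 200.0, 300.0]))
```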

In general, the charge in a CCD will be transferred down the columns of the detector and then read out row by row by an electronic amplifier, which converts charge to voltage. Since this is a serial process, if the efficiency of transferring charge from one pixel to the next is 99%, then after moving through 100 pixels only 36% of the electrons will be left! So for a 10 megapixel camera, where the charge may pass through as many as 6,000 pixels before being measured, the charge transfer efficiency has to be more like 99.9999%! Historically, this was first achieved by cooling the CCDs with liquid nitrogen to reduce thermal noise, a practical approach for detectors on spacecraft but one that initially limited commercial applications. But eventually CCDs were made with decent transfer efficiency at room temperature, and this has been the main technological factor behind the development of digital photography. CCDs themselves don’t distinguish between different colors of photons, but color filters can be placed over different pixels to create a red channel, a green channel, and a blue channel that are recombined to make a color image. CCDs are the image sensors in most of our digital cameras and many of our phone cameras, and it was partly for this enormous technological impact that Boyle and Smith received the Nobel Prize in 2009.
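The compounding arithmetic is worth seeing explicitly, since the surviving fraction after n transfers is just the per-transfer efficiency raised to the nth power:

```python
# Charge transfer efficiency (CTE) compounds: after n transfers the
# surviving fraction of a charge packet is CTE ** n.
def surviving_fraction(cte: float, transfers: int) -> float:
    return cte ** transfers

print(f"{surviving_fraction(0.99, 100):.1%}")       # ~36.6% -- unusable
print(f"{surviving_fraction(0.999999, 6000):.2%}")  # ~99.40% -- image quality
```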

There are a lot more cool details about the function of CCDs over at the wikipedia page, and many researchers are still finding ways to improve CCDs!

Plasma Displays and Plasma Physics

You may have noticed one big technology missing from my recent post on how displays work: I didn’t talk about plasma displays! I wanted to have more space to discuss what plasma actually is before getting into the detail of how the displays work, and that’s what today’s post is about.

Plasmas are usually considered a state of matter. But whereas order and density distinguish the other states of matter from each other—solids are dense and ordered, liquids are dense and disordered, and gases are disperse and disordered—for plasma there is another factor that is important. Plasmas are disperse and disordered like gases, but they are also ionized. Whereas a non-ionized gas consists of neutral atoms, in an ionized gas the negatively charged electrons have been stripped from the positively charged atomic nuclei, and both are moving freely through the gas. The positively charged nuclei are called ions, to indicate that they carry a net charge, and together with the free electrons they make the gas electrically conductive.

Remembering the attractive force that oppositely charged particles experience, it might seem like a plasma would be pretty short-lived! Electrons and nuclei form stable atoms together because that is a low-energy configuration, which means it’s very appealing for the plasma to recombine into regular atoms. And in fact that’s what happens if you let it cool down, but if you keep the plasma temperature high, the ions are more likely to stay separated. In fact, how many of the atoms are ionized depends roughly on the plasma temperature. Hotter plasmas often have nearly all of their atoms broken apart and ionized, whereas cooler plasmas may be only partly ionized. But the more ions you have, the more electromagnetic interactions occur within the plasma because of all the free charge, and this is what makes plasmas behave differently from non-ionized gases.
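The relationship between temperature and ionization can be made quantitative with the Saha equation. Here’s a sketch for the simplest case, hydrogen, with the density chosen as a round number roughly in the range of a stellar photosphere (an assumption for illustration):

```python
import math

# Saha equation for hydrogen: estimate the fraction of atoms that are
# ionized at temperature T (kelvin) and total number density n (m^-3).
K_B = 1.380649e-23            # Boltzmann constant, J/K
M_E = 9.1093837e-31           # electron mass, kg
H = 6.62607015e-34            # Planck constant, J*s
CHI = 13.6 * 1.602176634e-19  # hydrogen ionization energy, J

def ionization_fraction(t_kelvin: float, n_total: float) -> float:
    # Right-hand side of the Saha equation (statistical weights ~1 for H).
    s = (2 * math.pi * M_E * K_B * t_kelvin / H**2) ** 1.5 \
        * math.exp(-CHI / (K_B * t_kelvin))
    # With n_electron = n_ion, x**2 / (1 - x) = s / n_total; solve for x.
    a = s / n_total
    return (-a + math.sqrt(a * a + 4 * a)) / 2

print(f"{ionization_fraction(6000, 1e23):.0e}")   # ~2e-04: mostly neutral
print(f"{ionization_fraction(20000, 1e23):.2f}")  # ~0.96: largely ionized
```

A jump from 6,000 K to 20,000 K takes hydrogen from roughly one ion in five thousand atoms to almost fully ionized, which is why temperature is so central to keeping a plasma alive.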

A hot gas of ions may sound somewhat removed from the quotidian phases of solid, liquid, and gas. But actually, plasma is the most common phase of luminous matter in the universe, prevalent both in stars and in the interstellar medium. (I say luminous matter here to distinguish from dark matter, which seems to make up more total mass than the matter we can see, and whose phase and nature are both unknown.) There are also lots of examples of plasmas here on Earth, such as lightning bolts, the northern lights, and the neon that lights up a neon sign. You may have noticed that these are all light-emitting phenomena; the high energy of the ions means that they have many lower energy states available to jump to, and those energy level changes often involve emitting a photon to produce visible light.

So how can plasma be controlled to make a display? Illumination comes from tiny pockets of gas that can be excited into a plasma by applying an electrical current, and each pixel is defined by a separate plasma cell. For monochrome displays, the gas can be something like neon which emits light that we can see. But to create multiple colors of light, various phosphors are painted in front of the plasma cells. The phosphors absorb the light emitted by the plasma and emit their own light in a variety of colors (this is also how color CRT displays work). Plasma displays tend to have better contrast than LCDs and less dependence on viewing angle, but they also consume quite a bit of energy as you might expect from thinking about keeping the ions in the plasma separated.

There are a lot of other cool things about plasmas, like how they can be contained by electromagnetic fields and how they are used in modern industrial processing to etch semiconductors and grow nanomaterials. But for further reading I definitely recommend the wikipedia article on plasmas.

That’s Edutainment? – the fine line between catching their attention and losing the plot

Edutainment can be a dirty word, depending on who you ask. While the blending of entertainment and education seems to be a positive step – why wouldn’t you want people to enjoy learning? – it can be viewed with distaste by those who consider themselves ‘real’ educators. Very few of them would claim that their own flavours of educational work are without enjoyment, so why this disconnect?

Edutainment centres on the idea that all learning should be ‘fun’, and often incorporates technology including video games, films and radio, often with an interactive focus. It’s informal and revolves less around a central teacher figure and more around the engagement of the student with a narrative designed to impart information while also engaging them emotionally. And while edutainment principles can be used to deliver any curriculum, the approach often seems to be focused on making science fun and enjoyable for otherwise unengaged students.

The benefits of this type of engagement are myriad: it addresses the challenge of catching and keeping people’s attention, it can make traditionally ‘boring’ or ‘difficult’ subjects more engaging, and it does so in a way that requires little in the way of preparation or resources for those delivering it (after initial development, of course). So what objections could possibly be lodged against it?

One of the main criticisms boils down to the fact that in ‘making learning fun’, traditional learning is relegated to the ‘unfun’ sector; it becomes an obstacle to cover up or transmute into something palatable and shiny. While scientists and engineers hopefully enjoy their jobs and are passionate about their subjects, it’s doubtful that any given individual would agree that their progression and learning had been all fun, all the time. Should we be encouraging young people to engage only with that which tells them a nice story or catches their eye with well-designed graphics, or should we level with them that STEM isn’t always fun? Facts need to be learned, abstract theories must be understood, and above all critical thinking, hard work and dedication are key to becoming successful learners and contributors to the wider sphere of understanding.

That being said, as educators we should be willing to see our methods evolve and change with time. Edutainment activities can have value as long as they are properly vetted and evaluated for impact (both on attitude and on uptake of content). Some developing technologies are especially exciting – for example, http://www.teachwithportals.com/ is a free-for-schools initiative launched by distribution platform Steam, allowing teachers to use the Puzzle Maker and other templates to explore physics, maths, technology and even literacy topics while students play their way through derivatives of a wildly popular video game. This will not be a solution for all – there comes the problem of timetable integration, of whether school computers are capable of running the appropriate software, of the training of teachers to make use of yet another new tool – but it might work for some. And if it doesn’t, well, there’s always the classics.

Electronic Display Basics

As the capabilities of computers grew, it became more and more important to have an easy means of interacting with them: providing inputs and seeing outputs. Paper was used to this end for many years, but another technology already existed in television and was co-opted for computer displays: the cathode ray tube.

Cathode ray tubes (usually abbreviated CRTs) are pretty similar in construction to the vacuum tube, the early logic switch component. Inside a glass tube which has had the air removed, we have a filament that can emit electrons when heated. But instead of the electron collector in the vacuum tube, there is a screen which immediately emits light when struck by electrons, a behavior called fluorescence. Now, in order to create a two-dimensional image, you need a very focused electron beam, as well as some way to move the electron beam across the screen. You can use an electric field for both these tasks, to create a focused beam that can be scanned around the screen in order to light up different sections. But the larger the screen, the deeper the display has to be to give the beam enough room to scan the screen with reasonable resolution. This accounts for the large size of CRT monitors, which you either remember or occasionally spot in movies featuring computers from about 1980 to 2000.

The main display technology that displaced the CRT is based on a pretty amazing class of materials called liquid crystals (and you may recognize the acronym LCD, which stands for liquid crystal display). The name comes from the fact that a collection of elongated, rod-like molecules, with one axis significantly longer than the others, acts like a hybrid of two phases of matter: a liquid and an ordered solid crystal. The molecules themselves can move around and are not fixed in one place, but the orientation of one affects those around it, and it’s possible to have liquid crystal assemblies with various degrees of ordering. The key to using liquid crystals in a display is controlling their orientation, because their orientation controls the polarization of light passing through them. Each liquid crystal cell sits between a pair of crossed polarizing filters: in one state the crystals rotate the light’s polarization so that it passes through the output filter, making the cell bright, while applying an external electric field reorients the crystals so the polarization is unchanged and the output filter blocks the light, making the cell dark. If you have a uniform backlight behind an array of these cells, then you’ve created a basic liquid crystal display.
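A rough way to put numbers on this is Malus’s law, treating the cell as an ideal polarization rotator between crossed polarizers (a simplification of a real twisted-nematic cell):

```python
import math

# Transmission through crossed polarizers when the liquid crystal cell
# rotates the incoming light's polarization by `twist` degrees (Malus's law).
def transmission(twist_degrees: float) -> float:
    # Crossed polarizers sit 90 degrees apart, so the angle between the
    # rotated polarization and the output filter is (90 - twist) degrees.
    return math.cos(math.radians(90.0 - twist_degrees)) ** 2

print(f"{transmission(90.0):.2f}")  # full twist: bright pixel (1.00)
print(f"{transmission(0.0):.2f}")   # no twist: dark pixel (0.00)
print(f"{transmission(45.0):.2f}")  # partial twist: a gray level (0.50)
```

Intermediate voltages give intermediate twist angles, which is how an LCD produces a continuous range of brightness levels.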

Now, one thing to note about both the above technologies is that they are emissive: they generate light, so they can be read in a darkened room and used as a flashlight or a source of eyestrain. But this also means that whenever an emissive display is switched on, it’s consuming power, and when it’s switched off there is nothing displayed. Contrast this with that ancient display technology, paper, which is reflective: it can only be read with enough ambient light bouncing off its surface, but once created it costs nothing to display its content continuously. Of course, you can’t rewrite what’s on paper very easily, so you end up using a lot of it. Additionally, the same natural resources we use for paper are in high demand for many other things, and transporting a library as a huge collection of papers is quite onerous. For these and many other reasons, the move to electronic displays is pretty strongly motivated. But is there a way to make a reflective electronic display?

If you have ever used a Kindle, or any other device with an e-ink display, then you know the answer is yes. The surface of an e-ink display has millions of tiny cells, which contain minuscule beads that are either white and positively charged, or black and negatively charged. An electric field can be applied to each cell, bringing either white or black beads to its surface. Light coming into the surface from outside sources is then either reflected off the white beads or absorbed by the black beads, creating a rewritable reflective display.

Of course, paper is still the cheapest display technology by far.  But in some cases, for example when you want to make small corrections in content, display something outside, ship information somewhere, or carry an entire library on something the size of a pad of paper, newer inventions are getting the edge on paper.

What is a memristor?

One of the most hyped devices of the past few years has been the memristor, and in comparison to other circuit elements I think you’ll agree it’s pretty weird, and interesting!

The existence of the memristor was first predicted in 1971 by Leon Chua. The behavior of basic circuit elements had long been described mathematically, with each circuit element having a given relationship between two of four quantities: charge (q), current (I), voltage (V), and flux (φ). Chua noticed that the mathematical equations could be tabulated and had a symmetry of sorts, but that one equation seemed to be missing: a device that directly related charge to magnetic flux. The behavior of such a device would change depending on how much current had passed through it, and for this reason Chua called it a ‘memristor’, a contraction of ‘memory’ and ‘resistor’. The relationships can be pictured as the edges of a tetrahedron whose vertices are the four variables: resistor behavior is quantified by the resistance R, capacitor behavior by the capacitance C, inductor behavior by the inductance L, and the missing edge by the memristance M. It’s not crucial to understand these equations intimately, just to see that they have a certain symmetry and completeness to them as a set of relations between these four key variables. Five of these relationships had been experimentally observed in devices, and mathematically they suggested a sixth, the memristance equation. But having the equation doesn’t tell you how to make a device that will exhibit that behavior!
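Written out (in the differential form Chua’s symmetry argument uses), the six relations are

$$dV = R\,dI, \qquad dq = C\,dV, \qquad d\varphi = L\,dI, \qquad d\varphi = M\,dq,$$

plus the two definitions $dq = I\,dt$ and $d\varphi = V\,dt$. The first three element laws and the two definitions were well established; the fourth element law, relating flux to charge through the memristance M, is the one Chua predicted.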

Chua’s initial proposals to physically create a memristor used external power to store the remembered information, making the component active rather than passive. However, in 2008 a real-world passive memristor was created using a nanoscale film with embedded free charge that could be moved by applying an electric field. How much of the film contains the extra charge determines how much of the device is in a high-resistance state versus a low-resistance one, which makes the total resistance depend on how much current has passed through.
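To see how ‘resistance with memory’ emerges, here’s a minimal sketch of the linear drift model often used to describe that 2008 device (all parameter values below are illustrative assumptions, not measurements):

```python
import math

# Linear drift model of a memristor: a film of thickness D is split into
# a doped (low-resistance) region of width w and an undoped
# (high-resistance) region; current pushes the boundary w back and forth.
R_ON, R_OFF = 100.0, 16e3  # bounding resistances in ohms (assumed)
D = 10e-9                  # film thickness in meters (assumed)
MU = 1e-14                 # dopant mobility in m^2/(V*s) (assumed)

def final_resistance(voltage_fn, t_end, dt=1e-5):
    """Integrate the drift equation under a driving voltage; return R."""
    w = 0.1 * D  # initial boundary position
    t = 0.0
    while t < t_end:
        x = w / D
        resistance = R_ON * x + R_OFF * (1.0 - x)
        current = voltage_fn(t) / resistance
        # The boundary drifts at a rate proportional to the current.
        w += MU * R_ON / D * current * dt
        w = min(max(w, 0.0), D)  # the boundary cannot leave the film
        t += dt
    return resistance

# Half a cycle of positive drive lowers the resistance; reversing the
# current would raise it again -- the device remembers its history.
drive = lambda t: math.sin(2 * math.pi * t)
print(f"{final_resistance(drive, t_end=0.5):.0f} ohms")
```

The key feature is that the state variable w changes only when charge flows through the device, so it holds its value when the power is off — exactly what you want from non-volatile memory.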

This isn’t the only implementation of a memristor available, because as many researchers realized once the first results were announced, memory of past current flow is a common nanoscale feature. Current flow can often cause small changes in materials, and while these changes may not be noticeable in the properties of a bulk material, when the material has very small thickness or feature size the changes can affect its properties in a measurable way. Since this constitutes a form of memory that lasts for a significant amount of time, and there is a large market for non-volatile electronic memory for computers, the commercial interest in these devices has been considerable. HP expects to have its version of memristor-based computer memory on the market by 2014, and it remains to be seen what other novel electronics may come from the memristor.