Monthly Archives: October 2012

The Electronic Eye: Charge-Coupled Devices

Now that we’ve looked at some of the basic interactions between electrons and light, we can turn our focus to electronic photodetectors: devices that can sense light and respond by producing a current. A simple example is the semiconductor photomultiplier we talked about last time, which can measure the intensity of the light reaching it with great sensitivity.

But what do we do if we want to record an image: a two-dimensional map of the intensity of incident light? In traditional photography, silver halide crystals on photographic film interact with incident photons, and the chemical development process then causes the altered crystals to darken. Since semiconductors generate a number of electrons proportional to the number of incident photons, you might think it would be easy to develop a similar digital process. But the major issue for digital imaging was not so much sensitivity as signal processing: if you have a thin film which is electrically responding to light, how do you read out the resultant electronic signal without losing spatial resolution?

Because of these difficulties, early electronic imaging was done using vacuum tubes, a bulky but effective technology we’ve discussed several times before. Many researchers were looking for a practical means of imaging with semiconductors, but the major breakthrough came in 1969, when Boyle and Smith had the basic idea for structuring a semiconductor imaging device in what’s now called the charge-coupled device (CCD).

To retain spatial resolution in an image, the photoresponsive semiconductor in a CCD is divided into an array of capacitors, each of which can store some amount of charge. One way to picture it is as an array of wells, where a rainstorm can dump a finite amount of water into any wells under it, and that water remains separate from the water in neighboring wells. But in a CCD, as photons enter the semiconductor and generate electrons, electrodes at different voltages create potential wells to trap the electrons. These wells define what we call pixels: the smallest possible subdivisions of the image.

However, to make an image we also need to measure how much charge has accumulated in each well. In our device, this means moving the electrons to an amplifier. But how can we transfer those wells of electrons without letting them mix with each other (which would blur the image) or leak out of the device altogether? To accomplish this, we need the wells confining the electrons to be mobile! Remember that the wells themselves are defined by applying voltages to a patterned array of electrodes. This means a well can be moved, by lowering the potential directly in front of it and raising the potential directly behind it. This idea is illustrated below for a system with three coupled electrode arrays, a silicon oxide insulating layer, and an active region of p-doped silicon.

You can imagine that, instead of our previous array of wells to map rainfall, we have an array of buckets and a brigade of volunteers to pass the buckets to a measurement point. The principle is sometimes called a bucket brigade, and the general method of moving electronic outputs forward is termed a shift register. The physical implementation in CCDs, using voltages which are cycling high and low, is called clocking.
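As a toy sketch (not any real device’s readout code), the bucket-brigade clocking can be modeled as a shift register in which each clock cycle hands most of a well’s charge to its neighbour, with a small fraction left behind:

```python
def clock_once(wells, cte):
    """Advance every charge packet one well toward the amplifier.

    A fraction `cte` (the charge transfer efficiency) of each packet
    moves forward; the remainder is left behind in its old well. The
    packet leaving the last well goes to the output amplifier.
    """
    moved = [w * cte for w in wells]
    left = [w * (1 - cte) for w in wells]
    new_wells = [left[0]] + [left[i] + moved[i - 1] for i in range(1, len(wells))]
    return new_wells, moved[-1]

def read_out(wells, cte=0.99):
    """Serially clock the whole register into the amplifier."""
    wells = list(wells)
    signal = []
    for _ in range(len(wells)):
        wells, out = clock_once(wells, cte)
        signal.append(out)
    return signal

# A single bright pixel far from the amplifier arrives attenuated,
# with the lost charge smeared into the wells behind it:
print(read_out([100.0, 0.0, 0.0], cte=0.99))
```

With a perfect transfer efficiency of 1.0, the readout is just the wells in reverse order; anything less, and every extra transfer costs a pixel some of its charge, which is exactly why the efficiency numbers below matter so much.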

In general, the charge in a CCD is transferred down the columns of the detector and then read out row by row by an electronic amplifier, which converts charge to voltage. Since this is a serial process, if the efficiency of transferring charge from one pixel to the next is 99%, then after moving through 100 pixels only 37% of the electrons will be left! So for a 10 megapixel camera, where the charge may pass through as many as 6,000 pixels before being measured, the charge transfer efficiency has to be more like 99.9999%! Historically, this was first achieved by cooling the CCDs with liquid nitrogen to reduce thermal noise, a practical approach for detectors on spacecraft but one that initially limited commercial applications. Eventually CCDs were made with excellent transfer efficiency at room temperature, and this was a key technological factor behind the development of digital photography. CCDs themselves don’t distinguish between different colors of photons, but color filters can be placed over different pixels to create red, green, and blue channels that are recombined to make a color image. CCDs are the image sensors in many of our digital cameras and phone cameras, and it was partly for this enormous technological impact that Boyle and Smith received the Nobel Prize in 2009.
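The transfer-efficiency numbers quoted above are just repeated multiplication, which you can check in a couple of lines (the pixel counts are the ones from the text, nothing device-specific):

```python
# Fraction of a pixel's charge surviving n transfers at a given
# charge transfer efficiency (CTE): cte multiplied by itself n times.
def surviving_fraction(cte, n_transfers):
    return cte ** n_transfers

print(surviving_fraction(0.99, 100))       # about 0.37 of the charge survives
print(surviving_fraction(0.999999, 6000))  # about 0.994
```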

There are a lot more cool details about the function of CCDs over at the Wikipedia page, and many researchers are still finding ways to improve CCDs!

Photoelectric Interactions

One of the strangest developments in modern physics came gradually, in the 19th century, as scientists learned more and more about the interactions between light and matter. In this post we’ll cover a few of the early experiments and their implications for both technology and our understanding of what light really is.

The first piece of the puzzle came when Becquerel found that shining a light on some materials caused a current to flow through the material. This is called the photovoltaic effect, because the photons incident on the material create a voltage difference, which in turn drives the current. At the nanoscale, the photons are like packets of energy which can be absorbed by the many electrons in the material. In a semiconductor, some electrons can be moved this way from the valence band to the conduction band. Thus electrons that were previously immobile, because they had no available energy states to jump to, now have many states to choose from, and can use them to traverse the material! This effect is the basis of most solid state imaging devices, like those found in your digital camera (and trust me, we will delve further into that technology soon!).
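As a rough illustration of that band-gap threshold, you can compare photon energies with silicon’s band gap of about 1.12 eV (the constants here are standard physical values, and the wavelengths are arbitrary examples):

```python
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electron-volt
SI_BAND_GAP_EV = 1.12 # approximate band gap of silicon at room temperature

def photon_energy_ev(wavelength_nm):
    """Energy of a single photon, E = hc/lambda, in electron-volts."""
    return H * C / (wavelength_nm * 1e-9) / EV

for nm in (400, 700, 1200):
    e = photon_energy_ev(nm)
    print(f"{nm} nm -> {e:.2f} eV, can excite silicon: {e > SI_BAND_GAP_EV}")
```

Visible light (400–700 nm) comfortably clears the gap, while infrared photons beyond about 1100 nm cannot promote an electron to the conduction band, which is why silicon detectors go blind in the far infrared.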

But as it turns out, if you use photons that have a high enough energy, you can not only bump electrons out of their energy levels, you can cause them to leave the material entirely! This is called the photoelectric effect, and in some senses it seems like a natural extension of the photovoltaic effect: another consequence of light’s electromagnetic interaction with charged particles.

But the experimental details of the photoelectric effect turn out to have a very interesting scientific consequence. Imagine shining a light on a material and observing the emitted electrons. You can change the light source in various ways, for example by changing the color or the brightness. Blue light has a shorter wavelength than red light, and thus more energy per photon, but if you only change the color of the light you won’t see any change in the number of electrons emitted; each one simply comes out with more or less kinetic energy (and if you tune the photon energy low enough, no electrons are ejected at all, and you are back to the photovoltaic effect). But if you change the intensity of the light, you find that brighter light causes more electrons to be ejected. This matters because at the time, light was thought of as a wave in space, an electromagnetic oscillation that could move through the world in certain ways. In the wave picture, more intense light carries more energy and so should eject faster electrons, just as higher energy waves in the ocean cause more erosion of the rocks and sand on the shore. But the experiment above disproves that idea: the energy of each packet sets the energy of each ejected electron, while the number of packets, which is the intensity of the light, sets how many electrons come out. So the photoelectric effect shows that light is quantized: while it has wave characteristics, it also has particle characteristics and breaks down into discrete packets.
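Einstein’s relation for the photoelectric effect, E_k = hf − φ, captures exactly this split: the frequency sets each electron’s maximum kinetic energy, while the intensity sets how many photons (and thus electrons) there are. Here is a small sketch using a work function of about 2.3 eV, roughly that of sodium and chosen purely for illustration:

```python
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electron-volt
WORK_FUNCTION_EV = 2.3  # roughly sodium; illustrative value

def max_kinetic_energy_ev(wavelength_nm):
    """Max kinetic energy of an ejected electron, or None if none are ejected."""
    e_photon = H * C / (wavelength_nm * 1e-9) / EV
    e_k = e_photon - WORK_FUNCTION_EV
    return e_k if e_k > 0 else None

print(max_kinetic_energy_ev(400))  # blue light: electrons come out, ~0.8 eV each
print(max_kinetic_energy_ev(700))  # red light: below threshold, None
```

No matter how bright you make the 700 nm source, each individual photon still falls short of the work function, so no electrons escape; that threshold behavior is the quantum signature.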

The photoelectric effect is used in a few very sensitive photodetectors, but is not as common technologically as the photovoltaic effect. There are, however, a few weird consequences of the photoelectric effect, especially in space. Photons from the sun can eject electrons from space stations and shuttles, and since those electrons then flow away and aren’t replenished, this can give the sun-facing side a positive charge. The photoelectric effect also causes dust on the moon to charge, to the point that some dust is electrostatically repelled far from the surface. So even though the moon has no significant gas atmosphere as we have on Earth, it has clouds of charged dust that fluctuate in electromagnetic patterns across its face.

Getting Started in Science Communication

[Adapted from an email to a university researcher interested in getting more involved in public engagement/science communication. All links are UK-specific; if you have questions about other areas/countries please don’t hesitate to ask in comments and I’ll do my best to help!]

I’m glad to hear that you’re interested in getting more involved in public engagement, as I feel it’s a very valuable and worthwhile use of scientists’ time. In terms of how to get involved, it’s a wide and varied arena: there are plenty of different avenues depending on your interests. While there are actual Science Communication programmes/degrees out there, it’s by no means necessary to devote years of your life to another programme when you could work on it in conjunction with what you’re already doing. Though some people come to science communication through an education background, that’s by no means the only way. Plenty of successful science communicators started off as researchers and parlayed their knowledge and enthusiasm into public engagement (for example, all of my current colleagues here at Dundee Science Centre have a scientific background, not an education one).

If you’re interested in working with young people you will have the chance to get involved in lots of science communication activities through the STEM Ambassador programme – everything from school visits to festival volunteering, and each one will give you a bit more insight into what science communication is and how you can contribute to it. Sign up online to receive an invitation to a local induction near you!

If you’re interested in developing your public engagement skills I’d definitely suggest checking out the British Interactive Group (BIG). They’re holding a one-day conference about science communication in January that might be a great place to start off your involvement, but there are plenty of other resources on their website as well. They have a mailing list with interesting daily discussions on a variety of topics, as does PSI-COM; I’m subscribed to both and find them immensely entertaining and useful.

Checking out the various science festivals around the country will give you an idea of what people are doing, and maybe help you start thinking of how you can communicate what you’re passionate about to general audiences. Festivals have the benefit of having events aimed at a wide range of audiences, so you can experience a variety of engagement strategies in one place.

There are tons of great science centres across the UK, and again, they’re a good place to start if you want to see what the public are interested in and the ways science communicators are delivering that information. I always make it a priority to visit the science centre when I’m in a new city; it’s always a good day out.

The British Science Association are a great resource, and you can get involved with your local branch to find out more about what they’re doing in your area.

It’s also worth checking whether your university has a Public Engagement office (or similar) dedicated to supporting staff and students in engaging with the local community. They often work quite closely with all the folk listed above, so they’re a good place to start if you’re just beginning.

These are of course just a few starting points and by no means an exhaustive list. There’s no ‘right’ way to go about getting involved in public engagement, and if you’ve ever explained your research at a party or helped someone with their schoolwork then guess what? You’re already well on your way!

Plasma Displays and Plasma Physics

You may have noticed one big technology missing from my recent post on how displays work: I didn’t talk about plasma displays! I wanted to have more space to discuss what plasma actually is before getting into the detail of how the displays work, and that’s what today’s post is about.

Plasmas are usually considered a state of matter. But whereas order and density distinguish the other states of matter from each other (solids are dense and ordered, liquids are dense and disordered, and gases are disperse and disordered), for plasma there is another factor that is important. Plasmas are disperse and disordered like gases, but they are also ionized. Whereas a non-ionized gas consists of neutral atoms, in an ionized gas the negatively charged electrons have been stripped from the positively charged atomic nuclei and both are moving freely through the gas. The positively charged atomic cores left behind are called ions, so a plasma is a mix of free electrons and ions. Remembering the attractive force that oppositely charged particles experience, it might seem like a plasma would be pretty short-lived! Electrons and nuclei form stable atoms together because that is a low-energy configuration, which means it’s very appealing for the plasma to recombine into regular atoms. And in fact that’s what happens if you let it cool down, but if you keep the plasma temperature high, the ions are more likely to stay separated. In fact, the fraction of atoms that are ionized depends roughly on the plasma temperature. Hotter plasmas often have nearly all of their atoms broken apart and ionized, whereas cooler plasmas may be only partly ionized. But the more ions you have, the more electromagnetic interactions occur within the plasma because of all the free charge, and this is what makes plasmas behave differently from non-ionized gases.
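That temperature dependence can be made quantitative with the Saha equation. Here’s a rough sketch for a pure hydrogen gas; the density and temperatures are arbitrary illustrative values, and statistical-weight factors (close to one for hydrogen) are dropped:

```python
import math

K_B = 1.380649e-23              # Boltzmann constant, J/K
M_E = 9.1093837015e-31          # electron mass, kg
H = 6.62607015e-34              # Planck constant, J*s
CHI_H = 13.6 * 1.602176634e-19  # hydrogen ionization energy, J

def ionization_fraction(temp_k, n_total):
    """Estimate the ionized fraction x of hydrogen from the Saha equation.

    Solves x**2 / (1 - x) = s / n_total, where s bundles the thermal
    factors of the Saha equation (statistical weights taken as ~1).
    """
    s = (2.0 * math.pi * M_E * K_B * temp_k / H**2) ** 1.5 \
        * math.exp(-CHI_H / (K_B * temp_k))
    a = s / n_total
    # positive root of x**2 + a*x - a = 0
    return (-a + math.sqrt(a * a + 4.0 * a)) / 2.0

# at a fixed (illustrative) density, ionization climbs steeply with temperature
for t in (5000, 10000, 20000):
    print(t, "K ->", ionization_fraction(t, n_total=1e20))
```

At this density the gas goes from essentially neutral at 5,000 K to almost fully ionized by 20,000 K, which is the steep dependence described above.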

A hot gas of ions may sound somewhat removed from the quotidian phases of solid, liquid, and gas. But actually, plasma is the most common phase of luminous matter in the universe, prevalent both in stars and in the interstellar medium. (I say luminous matter here to distinguish from dark matter, which seems to make up more total mass than the matter we can see, and whose phase and nature are both unknown.) There are also lots of examples of plasmas here on Earth, such as lightning bolts, the northern lights, and the neon that lights up a neon sign. You may have noticed that these are all light-emitting phenomena; the high energy of the particles in a plasma means they have many lower energy states available to jump to, and those energy level changes often involve emitting a photon of visible light.

So how can plasma be controlled to make a display? Illumination comes from tiny pockets of gas that can be excited into a plasma by an applied voltage, and each pixel is defined by a separate plasma cell. For monochrome displays, the gas can be something like neon, which emits light that we can see. But to create multiple colors of light, various phosphors are painted in front of the plasma cells. The phosphors absorb the light emitted by the plasma and emit their own light in a variety of colors (color CRT displays also rely on phosphors, though there the excitation comes from an electron beam rather than light). Plasma displays tend to have better contrast than LCDs and less dependence on viewing angle, but they also consume quite a bit of energy, as you might expect from thinking about keeping the ions in the plasma separated.

There are a lot of other cool things about plasmas, like how they can be contained by electromagnetic fields and how they are used in modern industrial processing to etch semiconductors and grow nanomaterials. For further reading I definitely recommend the Wikipedia article on plasmas.