Tag Archives: photons

Quantum Worldview

I have always loved the kind of science fiction where you think about a world that is largely like our own, but in some fundamental way different. What if we all lived underwater, or if the force of gravity were lower, or if the sun were a weaker star? To me, that’s what the world of quantum physics is like: in a lot of ways it’s similar to our own world, in fact it’s the basis of our world! But it’s also crazy and strange. So what would it be like if we were quantum creatures, if we could actually see how everything around us is quantized?

Well first, there’s what it means to be quantum. A quantum of anything is a piece that can’t be subdivided any further, the smallest possible unit. But this implies a sort of graininess, where rather than a continuous stream of, say, light, we start to see at the small scale that light is actually composed of little chunks, quanta of light. Imagine being able to see how everything around you is made of discrete pieces, from light to sound to matter. When the sun came up, you’d see it getting lighter in jumps. When you turn up your music, you’d hear each step of higher volume. And as your hair grew, you’d see it lengthening in little blips.

And at the quantum scale, the wave nature of everything becomes indisputably clear. We normally think of waves as something that emerges from a lot of individual objects acting together, like the water molecules in the sea, or people in a crowd. But if you look at quanta, you actually find that those indivisible packets of light or sound or matter are also waves, waves in different fields of reality. That’s hard to get your head around, but think of it this way: as a quantum wave, if you passed right by a corner, you could actually bend around the back of it a little the way that ripples going around a rock in water do. Things like electrons and photons of light actually do this, so for example the pattern below is made by light going through a circular hole, and the wave diffraction is clearly visible.

Amazingly, as a wave, you could actually have a slight overlap with the person next to you. This gets at something that’s key to quantumness: its probabilistic nature. Say I thought you were nearby, and I wanted to measure your position somehow to see how close you were to me. I’d need to get a quantum of something else to interact with you, but because it would be a similar size to you, it would slightly change your position or your speed. We don’t notice the recoil when sound waves bounce off us in the macroscale world, but if we were very small we would! So there is actually a built-in uncertainty when dealing with quantum objects, but we can say there’s a probability that they are in one place or another. So as a quantum creature, you can think of yourself as a little wave of probability, one that collapses to a point when measured but then spreads out again afterward. When I’m not measuring you, where are you really? Well, physically I can’t say, and this is why you can have a little wave overlap with your neighbour without violating the principle that you can’t both be in the same place at the same time.
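
For reference, this trade-off is what the Heisenberg uncertainty relation makes precise; in the usual notation,

\[
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2},
\]

where \(\Delta x\) is the spread in an object’s position, \(\Delta p\) is the spread in its momentum, and \(\hbar\) is the (very small) reduced Planck constant: squeezing one spread down forces the other to grow.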

And imagine that you’re next to a wall. As a wave you may have a little overlap of probability with that wall. And if the wall is thin enough, as a quantum object there is actually some chance that you’ll pass through the wall entirely! This is called quantum tunnelling, and actually it’s happening all the time in the electronics inside your phone. Modern microelectronics work in part because we can use effects from the quantum world in our own, larger world!
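
To get a feel for why the wall has to be thin, here’s a rough back-of-the-envelope estimate (a minimal sketch, not something from the original post): in the simplest thick-barrier approximation, the chance of tunnelling falls off exponentially with thickness, roughly T ≈ e^(−2κd). The 1 eV barrier height and the widths below are just assumed, illustrative numbers.

```python
import math

# Physical constants (SI units)
HBAR = 1.054571817e-34      # reduced Planck constant, J*s
M_ELECTRON = 9.1093837e-31  # electron mass, kg
EV = 1.602176634e-19        # one electron-volt in joules

def tunnelling_probability(barrier_height_ev, thickness_nm):
    """Rough exponential estimate T ~ exp(-2*kappa*d) for an electron
    tunnelling through a barrier it lacks the energy to climb over."""
    kappa = math.sqrt(2 * M_ELECTRON * barrier_height_ev * EV) / HBAR  # decay constant, 1/m
    d = thickness_nm * 1e-9
    return math.exp(-2 * kappa * d)

# An assumed 1 eV barrier: thin walls are leaky, thick walls are not.
for d_nm in (0.5, 1.0, 2.0, 5.0):
    print(f"{d_nm} nm barrier: T ~ {tunnelling_probability(1.0, d_nm):.1e}")
```

Even at a nanometre the odds are tiny, and they collapse rapidly after that, which is why you will never tunnel through your living-room wall but electrons in the flash memory of your phone do it routinely.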

It’s difficult to imagine a world where everything happens in discrete chunks, where I can see myself as a wave, where I don’t know where I am until someone else interacts with me. But this is the world at the quantum scale, and it’s not science fiction!

Light, Scattering, and Why the Sky is Blue

I’ve been writing more about light recently, so I wanted to cover a basic question that most people first ask as children: why is the sky blue? We can tell that the blue color of the sky is related to sunlight, because at night we can see out to the black of space and the stars. We also know it’s related to the atmosphere, because in photos from places like the Moon, which have no atmosphere, the sky is black even when the Sun is up. So what’s going on?

When light from the Sun reaches Earth, its photons have a combination of wavelengths (or energies), and we call the sum of all of those the solar spectrum. Some of these wavelengths of light are absorbed by particles in the atmosphere, but others are scattered, which means that the photons in question are deflected to a new direction. Scattering of light happens all around us, because the electromagnetic wave nature of photons makes them very sensitive to variations in the medium through which they travel. Other than absorption of light, scattering is the main phenomenon that affects color.

There are a few different types of scattering. We talked about one type recently, when discussing metamaterials and structural color: light can be scattered by objects whose size is similar to the wavelength of that light. That is called Mie scattering, and it’s why clouds appear solid even though they are mostly empty space. Clouds are formed of tiny droplets, comparable in size to (or somewhat larger than) the wavelengths of visible light, and because these droplets scatter all visible wavelengths roughly equally, the clouds themselves appear diffuse and white. Milk also appears white because the proteins and fat it holds in tiny droplets, suspended in water, scatter white light.

However, even objects much smaller than the wavelength of light can induce scattering. The oxygen and nitrogen molecules in the atmosphere can act as scatterers too, in what’s called Rayleigh scattering (or sometimes the Tyndall effect). In Rayleigh scattering, a molecule is affected by the electromagnetic field that the photon carries: the field polarizes the molecule, meaning the positive and negative charges in the molecule are pushed in opposite directions, and the polarized molecule then interacts with the light by scattering it. But the polarizability of individual molecules depends on the wavelength of the incoming light, meaning that some wavelengths will scatter more strongly than others. When Rayleigh worked out the mathematical form of this dependence, in 1871, he found that the scattering is inversely proportional to the fourth power of the wavelength, which means that blue light (which has a shorter wavelength) scatters much more strongly than red light (which has a longer wavelength).
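
To put a number on that fourth-power dependence, here’s a quick sanity check (a minimal sketch; the 450 nm and 650 nm values are just representative choices for blue and red light):

```python
# Rayleigh scattering strength goes as 1/wavelength^4,
# so the ratio of blue to red scattering is (lambda_red / lambda_blue)^4.
lambda_blue_nm = 450.0  # representative blue wavelength
lambda_red_nm = 650.0   # representative red wavelength

ratio = (lambda_red_nm / lambda_blue_nm) ** 4
print(f"Blue light scatters about {ratio:.1f} times more strongly than red light")
# -> roughly 4.4 times
```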

Thus, we see the Sun as somewhat yellow, because only the longer wavelength light in red and yellow travels directly to us. The shorter wavelength blue light is scattered away into the sky, and comes to our eyes on a very circuitous and scattered route that makes it look like the blue light is coming from the sky itself. At sunset, the sun appears even redder because of the increased amount of atmosphere that the light has travelled through, scattering away even more blue light. And, when there is pollution in the air, the sun can appear redder because there are more scattering centers that scatter away the blue light.

Of course, the fact that blue light scatters more is only half the story. If that were the whole story, we’d see the sky as a deep violet, because that’s the shortest wavelength of light that our eyes can see. But even though we can see the violet in a rainbow, our eyes are actually much less sensitive to it than they are to blue light. Our eyes perceive color using special neurons called cones, and of the three types of cones, only one can detect blue and violet light. That blue cone’s response to light peaks at around 450 nm, which is in the blue part of the spectrum. So we see the sky as blue because blue is the shortest wavelength that we’re capable of detecting well. Different particles in the air can change the color of the sky, but so would different ways of sensing color. So Rayleigh scattering determines which light is scattered, and our visual system determines which of that light we see best: sky blue.

Living Stars: What is Bioluminescence?

Recently something unusual happened: I had an idea that was illustrated and published in Wired. They have a gallery of hybrid animals up, including drawings made by students in the CSU Monterey Bay Science Illustration Program, and my contribution was bioluminescent starlings. I personally think that watching a murmuration of glowing starlings flocking would be amazing. But how does bioluminescence work exactly?

Bioluminescence is light emission from a living creature. How does that happen? Remember that light is a form of energy, and if a particle undergoes a transition from one energy level to another, the difference in energy has to go somewhere and may be emitted as light. Some of the light we get from the sun comes from atomic energy level transitions that happen inside it. But the same thing can also occur in more complex chemical reactions: excess energy can be used to create a new compound, or heat up the reactants, but it may also be emitted as light. (Whether or not this happens depends on the mechanism of the chemical reactions and, as usual, on energy minimization.)

So bioluminescence occurs when a chemical reaction happens, inside a living organism, that emits light. It’s actually relatively common in deep-sea creatures, who don’t have much other light around. But it’s also seen closer to shore in bioluminescent algae, and on dry land with fireflies. What these creatures have in common is that they produce luciferin, a class of pigments that can be oxidized to produce light, and luciferase, an enzyme that catalyzes the reaction. These creatures can then use the bioluminescence to communicate with other creatures, for camouflage, luring prey, or attracting mates.

Some fungi show bioluminescence too, though there are many competing theories on whether they gain some evolutionary advantage from it or not. But there are also many researchers working to introduce bioluminescence into plants and animals, by adding the genes that create luciferin and luciferase, or by adjusting their expression. Organisms that light themselves up could help with biological imaging, but making more things bioluminescent has both a practical and an aesthetic appeal.

Visible Light and the Electromagnetic Spectrum

We already know the basics of light: it’s electromagnetic energy, carried through space as a wave, in discrete packets called photons. But photons come in a variety of energies, and different energy photons can be used for different real-world applications. The energy of a photon determines, among other things, how quickly the electromagnetic wave oscillates. Higher energy photons oscillate more quickly than lower energy photons, so we say that high-energy photons have a higher frequency.

This frequency isn’t related to the speed that the photons travel, though. They can oscillate more or fewer times over a given distance, but still traverse that distance in the same amount of time. And as we know, the speed of light is given by Maxwell’s Equations for electromagnetism, and is constant regardless of reference frame! But another way to look at frequency is by considering the wavelength of light. Picture two photons which are traveling through space, at the same speed, but with one oscillating faster than the other. Thus one photon is high-frequency and one is low-frequency. While traversing the same distance, the high-frequency photon will oscillate more times than the low-frequency photon, so the distance covered by each cycle is smaller. We call this distance for a single cycle the wavelength, and it’s inversely proportional to the frequency. Long-wavelength photons are low-frequency, and short-wavelength photons are high frequency. Overall the range of photon frequencies is called the electromagnetic spectrum.
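
The relations tying all of this together are simple enough to check with a few lines of arithmetic. The sketch below is just illustrative, using a 500 nm (green) photon as the example:

```python
# Photon arithmetic: wavelength * frequency = speed of light, energy = h * frequency.
C = 2.998e8        # speed of light, m/s
H = 6.626e-34      # Planck constant, J*s
EV = 1.602e-19     # one electron-volt in joules

wavelength = 500e-9                 # a green photon, 500 nm
frequency = C / wavelength          # ~6.0e14 oscillations per second
energy_ev = H * frequency / EV      # ~2.5 eV

print(f"frequency: {frequency:.2e} Hz, energy: {energy_ev:.2f} eV")
# Halving the wavelength doubles the frequency and the energy: they move together.
```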

On Earth, photons come from an external source, often the sun, and are reflected off various objects in the world. The photons of a specific color may be absorbed, and thus not seen by an observer, which will make the absorbing object look like the other, non-absorbed colors. If there are many absorbed photons or few photons to begin with, an object may just look dark. Our eyes contain molecules capable of detecting photons in the wavelength range 400-700 nanometers and passing that signal to our brains, so this is called the visible wavelength range of the electromagnetic spectrum. But it’s the interaction of photons with the world around us, and then with the sensing apparatus in our eyes, that determines what we see. Other creatures that have sensors for different frequencies of light, or that have more or fewer than three types of cones, may perceive the color and shape of things to be totally different. And the visible spectrum is only a small slice of the total range of photon frequencies, as you can see in the image below!

Photons that are slightly lower energy than the visible range are called infrared, and our skin and other warm objects emit photons in the infrared. Night-vision goggles often work using infrared photons, and some kinds of snakes can see infrared. Even lower energy photons have a lot of practical uses: microwave photons can be used to heat food and other materials, and radio waves are made of photons with such low energies, and such long wavelengths, that they travel long distances without being absorbed, which makes them ideal for long-range communication! Because long wavelength photons are so hard to absorb or alter, they’re also really useful for astronomy, for example for observing distant planets and stars.

The sun emits photons in the visible range, but it also emits a lot of photons with a slightly higher energy, called ultraviolet or UV. Sunscreen blocks UV photons because they carry enough energy to damage biological tissue, and that damage is sunburn! At even higher frequencies, x-rays are a type of photon that is widely used in biomedical imaging, because they can penetrate tissue and show a basic map of a person’s bones and organs without surgery. And very high energy gamma rays are photons which result from processes in the nuclei of atoms, and they can pass through most material. I’ll talk a bit more about x-rays and gamma rays soon, as part of a larger discussion of radiation.

There is a lot more to light than visible light, and the various parts of the electromagnetic spectrum are used in many applications. Each wavelength gives us different information about the world, and we can use technology to extend the view that we’re biologically capable of to include x-rays, infrared, and many other parts of the electromagnetic spectrum!

A Quick Introduction to Photonics

Last time when we talked about CCDs, we were concerned with how to take an optical signal, like an image, and convert it to an electronic signal. Then it can be processed, moved, and stored using electronics. But there is an obvious question this idea raises: why is the conversion to electronic signal needed? Why can’t we process the optical signal directly? Is there a way to manipulate a stream of photons that’s analogous to the way that electronic circuits manipulate streams of electrons?

The answer is yes, and the field dealing with optical signal processing is called photonics. In the same way that we can generate electronic signals and manipulate them, signals made up of light can be generated, shuffled around, and detected. While the underlying physical mechanisms are different from those in electronics, much of the same processing can take place! There are a lot of cool topics in photonics, but let’s go over some of the most basic technology just to get a sense for how it all works.

The most common way to generate optical signals in photonics is by using a laser diode, which is actually another application of the p-n junction. Applying a voltage across the junction causes electrons to drift into the junction from one side, while holes (which are oppositely charged) drift in from the other side. This “charge injection” results in a net current flow, but it also means that some electrons and holes will meet in the junction. When this happens, they can recombine if the electron falls into the empty electron state that the hole represents. But there is generally an energy difference between the free electron and free hole states, and this energy can then be emitted as a photon. This is how light with a specific energy is generated in a semiconductor laser diode, and when the junction is placed inside a reflective cavity that amplifies that light, you get a very reliable light source that is easy to modulate in order to encode a signal.
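
The energy of the emitted photon is set by that electron-hole energy difference, which in a semiconductor is roughly the band gap. Here is a hedged illustration; the 1.42 eV figure is the commonly quoted band gap of gallium arsenide, used only as an example material:

```python
# Wavelength of light emitted when an electron and hole recombine across a band gap.
# Handy shortcut: wavelength in nm ~ 1240 / (energy in eV).
HC_EV_NM = 1240.0  # Planck constant times speed of light, in eV*nm

band_gap_ev = 1.42  # illustrative value, roughly gallium arsenide
wavelength_nm = HC_EV_NM / band_gap_ev
print(f"Emitted wavelength: about {wavelength_nm:.0f} nm (near-infrared)")
```

Pick a material with a different band gap and you get a different color, which is why laser diodes come in specific, material-dependent wavelengths.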

But how do you send that signal anywhere else? Whereas electronic signals pass easily through metal wires, photonic signals are commercially transmitted through transparent optical fibers (hence the term “fiber optic”). Optical fibers take advantage of total internal reflection, a really cool phenomenon where, for certain angles at an interface, all incident light is reflected back off the interface. Since light is a quantized electromagnetic wave, how it moves through a material depends on how easily that material can be made to oscillate along with it, and the technical measure of this is the refractive index. Total internal reflection is a direct consequence of Snell’s Law, which describes how light bends when it passes between media with different refractive indices. So optical fibers consist of a core fiber with a high refractive index, clad in a sheath with a lower refractive index, tuned so that the inner core exhibits total internal reflection for the wavelengths of light being transmitted. You can see an example of total internal reflection below, for light travelling through a plastic surrounded by air. Because of total internal reflection, optical fibers can transmit photonic signals over long distances, with less loss than an electronic signal moving through a long wire would experience, as well as less susceptibility to stray electromagnetic fields.
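
Snell’s Law also tells you exactly when total internal reflection kicks in: light trying to leave the high-index core is completely reflected once it hits the boundary beyond the so-called critical angle. A minimal sketch, using index values that are merely typical of silica fibers rather than taken from any particular product:

```python
import math

# Critical angle for total internal reflection: sin(theta_c) = n_cladding / n_core.
n_core = 1.475      # assumed, typical of a silica fiber core
n_cladding = 1.460  # assumed, typical of the surrounding cladding

theta_c = math.degrees(math.asin(n_cladding / n_core))
print(f"Critical angle: about {theta_c:.0f} degrees from the normal")
# Light hitting the core-cladding boundary at a more grazing angle than this
# (i.e. more than ~82 degrees from the normal) is totally reflected
# and stays trapped in the core.
```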

Photonic signals can then be turned back into electronic signals using semiconducting photodetectors, which take advantage of the photovoltaic effect. This technology is the basis of most modern wired telecommunications, including the Internet!

But if you are remembering all the electronic components, like resistors and capacitors and transistors, which we use to manipulate electronic signals, you may be wondering what the corresponding parts are for photonics. There are photonic crystals, which have microstructure that affects the passage of light, of which opal is a naturally occurring example! And photonic signals can be recorded and later read out on optical media like CDs and DVDs. But in general, the commercial possibilities of optical data transmission have outweighed those of complex photonic signal analysis. That’s why our network infrastructure is photonic but our computers, for now, are electronic. However, there are lots of researchers working in this area, so that could change, and that also means that if you find photonics interesting there is much more to read!

The Electronic Eye: Charge-Coupled Devices

Now that we’ve looked at some of the basic interactions between electrons and light, we can turn our focus to electronic photodetectors: devices that can sense light and respond by producing a current. A simple example is semiconductor photomultipliers, like we talked about last time, which are able to very sensitively measure the intensity of light that impacts the photomultiplier.

But what do we do if we want to record an image: a two-dimensional map of the intensity of incident light? In traditional photography, silver halide crystals on photographic film interact with incident photons, and then the chemical development process causes the altered crystals to darken. Since semiconductors generate a number of electrons proportional to the number of incident photons, you might think it would be easy to develop a similar digital process. But the major issue for digital imaging was not so much sensitivity as signal processing: if you have a thin film that responds electrically to light, how do you read out the resultant electronic signal without losing spatial resolution?

Because of these difficulties, early electronic imaging was done using vacuum tubes, a bulky but effective technology we’ve discussed several times before. Many researchers were looking for a practical means of imaging with semiconductors, but the major breakthrough came in 1969, when Boyle and Smith had the basic idea for structuring a semiconductor imaging device in what’s now called the charge-coupled device (CCD).

To retain spatial resolution in an image, the photoresponsive semiconductor in a CCD is divided into an array of capacitors, each of which can store some amount of charge. One way to picture it is as an array of wells, where a rainstorm can dump a finite amount of water into any wells under it, and that water remains separate from the water in neighboring wells. But in a CCD, as photons enter the semiconductor and generate electrons, electrodes at different voltages create potential wells to trap the electrons. These wells define what we call pixels: the smallest possible subdivisions of the image.

However, to be able to make an image we also need to measure how much charge has accumulated in each well. In our device, this means moving the electrons to an amplifier. But how can we transfer those wells of electrons without letting them mix with each other (which would blur the image) or leak out of the device altogether? To accomplish this, we need the wells confining the electrons to be mobile! But remember that the wells themselves are defined by applying voltages to a patterned array of electrodes. This means moving a well is possible, by lowering the potential directly in front of it and raising the potential directly behind it. This idea is illustrated below for a system with three coupled potential arrays, a silicon oxide insulating layer, and an active region of p-doped silicon.

You can imagine that, instead of our previous array of wells to map rainfall, we have an array of buckets and a brigade of volunteers to pass the buckets to a measurement point. The principle is sometimes called a bucket brigade, and the general method of moving electronic outputs forward is termed a shift register. The physical implementation in CCDs, using voltages which are cycling high and low, is called clocking.

In general, the charge in a CCD will be transferred down the columns of the detector and then read out row by row by an electronic amplifier, which converts charge to voltage. Since this is a serial process, if the efficiency of transferring charge from one pixel to the next is 99%, then after moving through 100 pixels only about 37% of the electrons will be left! So for a 10 megapixel camera, where the charge may pass through as many as 6,000 pixels before being measured, the charge transfer efficiency has to be more like 99.9999%! Historically, this was first achieved by cooling the CCDs using liquid nitrogen to reduce thermal noise, a practical approach for detectors on spacecraft but one that initially limited commercial applications. But eventually CCDs were made with decent transfer efficiency at room temperature, and this has been the main technological factor behind the development of digital photography. CCDs themselves don’t distinguish between different colors of photons, but color filters can be placed over different pixels to create a red channel, a green channel, and a blue channel that are recombined to make a color image. CCDs became the image sensors in generations of digital cameras and many phone cameras, and it was partly for this enormous technological impact that Boyle and Smith received the Nobel Prize in 2009.
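
Here’s a small sketch of that bucket-brigade readout, just to make the numbers concrete (the packet values and efficiencies are arbitrary illustrations, not from any real sensor): each clock cycle shifts every charge packet one pixel toward the output, and a tiny fraction of the charge is left behind at every transfer.

```python
def read_out(column, transfer_efficiency):
    """Bucket-brigade readout: each clock cycle shifts every charge packet one
    pixel toward the output amplifier, losing a fraction of the charge at each
    transfer; the packet that arrives at the output is then measured."""
    pixels = list(column)
    measured = []
    while pixels:
        pixels = [q * transfer_efficiency for q in pixels]  # shift everything one pixel
        measured.append(pixels.pop())                       # read the packet now at the output
    return measured

column = [1000.0] * 100  # 100 identical charge packets, in arbitrary units

out = read_out(column, 0.99)
print(f"At 99% transfer efficiency, the farthest packet keeps {out[-1] / 1000:.0%} of its charge")
# -> about 37%, i.e. 0.99 ** 100

print(f"After 6,000 transfers at 99.9999% efficiency, {0.999999 ** 6000:.1%} survives")
# -> about 99.4%
```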

There are a lot more cool details about the function of CCDs over at the Wikipedia page, and many researchers are still finding ways to improve CCDs!

Photoelectric Interactions

One of the strangest developments in modern physics came gradually, starting in the 19th century, as scientists learned more and more about the interactions between light and matter. In this post we’ll cover a few of the early experiments and their implications both for technology and for our understanding of what light really is.

The first piece of the puzzle came when Becquerel found that shining a light on some materials caused a current to flow through the material. This is called the photovoltaic effect, because the photons incident on the material are causing a voltage difference which the current then follows. At the nanoscale, the photons are like packets of energy which can be absorbed by the many electrons in the material. In a semiconductor, some electrons can be moved this way from the valence band to the conduction band. Thus electrons that were previously immobile because they had no available energy states to jump to now have many states to choose from, and can use these to traverse the material! This effect is the basis of most solid state imaging devices, like those found in your digital camera (and trust me, we will delve further into that technology soon!).

But as it turns out, if you use photons that have a high enough energy, you can not only bump electrons out of their energy levels, you can cause them to leave the material entirely! This is called the photoelectric effect, and in some senses it seems like a natural extension of the photovoltaic effect: another consequence of light’s electromagnetic interaction with charged particles.

But actually, there is a very interesting scientific consequence hiding in the experimental details of the photoelectric effect. Imagine shining a light on a material and observing the emitted electrons. You can change the light source in various ways, for example by changing the color or the brightness. Blue light has a shorter wavelength than red light, and thus more energy per photon, and it turns out the color is what decides whether electrons come out at all: below a threshold color, no electrons are ejected no matter how bright you make the light, and above that threshold, changing the color changes how much energy each ejected electron carries. Changing the intensity, on the other hand, only changes how many electrons are ejected, not the energy of each one. This matters because at the time, light was thought of as a wave in space, an electromagnetic oscillation spread smoothly through the world. A brighter wave carries more energy, so it should eventually shake electrons loose whatever its color, and should give them more energy the brighter it gets, just as higher energy waves in the ocean cause more erosion of the rocks and sand on the shore. But the experiment disproves that picture: the light behaves as a stream of individual packets, where the energy of each packet depends only on the color, and the brightness just sets how many packets arrive. So the photoelectric effect shows that light is quantized: while it has wave characteristics, it also has particle characteristics and breaks down into discrete packets.
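
Einstein’s explanation puts this in one line: each ejected electron absorbs a single photon, pays an energy toll (the material’s “work function”) to escape, and keeps the rest as kinetic energy. A minimal sketch; the 2.3 eV work function below is just a representative value, roughly that quoted for sodium:

```python
# Photoelectric effect: max kinetic energy of an ejected electron = photon energy - work function.
HC_EV_NM = 1240.0        # Planck constant times speed of light, in eV*nm
work_function_ev = 2.3   # assumed value, roughly sodium

for wavelength_nm in (700, 550, 400):  # red, green, violet-blue
    photon_energy = HC_EV_NM / wavelength_nm
    excess = photon_energy - work_function_ev
    if excess > 0:
        print(f"{wavelength_nm} nm: electrons ejected with up to {excess:.2f} eV")
    else:
        print(f"{wavelength_nm} nm: no electrons ejected, however bright the light")
```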

The photoelectric effect is used in a few very sensitive photodetectors, but it is not as common technologically as the photovoltaic effect. There are, however, a few weird consequences of the photoelectric effect, especially in space. Photons from the sun can knock electrons off space stations and shuttles, and since those electrons then flow away and aren’t replenished, this can give the sun-facing side a positive charge. The photoelectric effect also causes dust on the moon to charge, to the point that some dust is electrostatically repelled far above the surface. So even though the moon has no significant gas atmosphere like the one we have on earth, it has clouds of charged dust that shift in electromagnetic patterns across its face.