Tag Archives: electromagnetism

Light, Scattering, and Why the Sky is Blue

I’ve been writing more about light recently, so I wanted to cover a basic question that most people first ask as children: why is the sky blue? We can tell that the blue color of the sky is related to sunlight, because at night, we can see out to the black of space and the stars. We also know it’s related to the atmosphere, because in photos from places like the Moon which have no atmosphere, the sky is black even when the Sun is up. So what’s going on?

When light from the Sun reaches Earth, its photons have a combination of wavelengths (or energies), and we call the sum of all of those the solar spectrum. Some of these wavelengths of light are absorbed by particles in the atmosphere, but others are scattered, which means that the photons in question are deflected to a new direction. Scattering of light happens all around us, because the electromagnetic wave nature of photons makes them very sensitive to variations in the medium through which they travel. Other than absorption of light, scattering is the main phenomenon that affects color.

There are a few different types of scattering. We talked about one type recently, when discussing metamaterials and structural color: light can be scattered by objects that are similar in size to the wavelength of that light. That is called Mie scattering, and it’s why clouds appear solid even though they are mostly empty. Clouds are formed of tiny droplets, comparable in size to the wavelengths of visible light or larger, and because these droplets scatter all visible wavelengths roughly equally, the clouds themselves appear diffuse and white. Milk also appears white because it has proteins and fat in tiny droplets, suspended in water, which scatter white light.

However, even objects much smaller than the wavelength of light can induce scattering. The oxygen and nitrogen molecules in the atmosphere can also act as scatterers, in what’s called Rayleigh scattering (historically also associated with the Tyndall effect). In Rayleigh scattering, these molecules are affected by the electromagnetic field that the photon carries. A molecule can be polarized, meaning the positive and negative charges in the molecule are displaced in opposite directions, and the polarized molecule then interacts with the light by scattering it. But the polarizability of individual molecules depends on the wavelength of the incoming light, meaning that some wavelengths will scatter more strongly than others. When Rayleigh worked out the mathematical form of this dependence, in 1871, he found that the scattering intensity was inversely proportional to the fourth power of the wavelength, which means that blue light (which has a shorter wavelength) will scatter much more strongly than red light (which has a longer wavelength).
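To get a feel for how strong that fourth-power dependence is, here’s a quick back-of-the-envelope calculation (a minimal sketch; the two wavelengths are just representative values for “blue” and “red”):

```python
# Rayleigh scattering strength is proportional to 1 / wavelength^4,
# so the relative strength for two wavelengths is (longer / shorter)^4.

blue = 450e-9   # representative blue wavelength, in meters
red = 650e-9    # representative red wavelength, in meters

ratio = (red / blue) ** 4
print(f"Blue light scatters about {ratio:.1f}x more strongly than red")
# -> Blue light scatters about 4.4x more strongly than red
```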

Thus, we see the Sun as somewhat yellow: the longer wavelength red and yellow light travels to us mostly unscattered, while the shorter wavelength blue light is scattered away into the sky and reaches our eyes by such a circuitous, multiply scattered route that it looks like the blue light is coming from the sky itself. At sunset, the sun appears even redder because of the increased amount of atmosphere that the light has travelled through, which scatters away even more blue light. And when there is pollution in the air, the sun can appear redder still, because there are more scattering centers to scatter away the blue light.

Of course, the fact that blue light scatters more is only half the story. If that were all there was to it, we’d see the sky as a deep violet, because that’s the shortest wavelength of light that our eyes can see. But even though we can see the violet in a rainbow, our eyes are actually much less sensitive to it than they are to blue light. Our eyes perceive color using special neurons called cones, and of the three types of cones, only one can detect blue and violet light. That cone’s response peaks at around 450 nm, right in the middle of the blue part of the spectrum. So we see the sky as blue because blue is the shortest wavelength our eyes can still detect strongly. Different particles in the air can change the color of the sky, but so would different ways of sensing color. Rayleigh scattering determines which light is scattered, and our visual system determines which of that light we see best: sky blue.
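As a toy illustration of that interplay (emphatically not a real model of vision: I’m assuming a simple Gaussian cone response peaked at 450 nm with a 40 nm width, and ignoring the solar spectrum and the other cones), we can multiply the Rayleigh scattering strength by a rough sensitivity curve and see where the product peaks:

```python
import numpy as np

wavelengths = np.arange(380, 701)       # visible range, in nm
scattering = wavelengths ** -4.0        # Rayleigh: proportional to 1/wavelength^4

# Hypothetical blue-cone sensitivity: a Gaussian peaked at 450 nm.
# The 40 nm width is an assumption for illustration, not measured data.
sensitivity = np.exp(-((wavelengths - 450.0) ** 2) / (2 * 40.0 ** 2))

perceived = scattering * sensitivity
print(f"Perceived peak: {wavelengths[np.argmax(perceived)]} nm")
# Scattering alone peaks at 380 nm (violet), but the product
# peaks around 435 nm, pulled toward blue by the eye's response.
```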

Plasmons, Shiny Metals, and Stained Glass

Remember plasmas, the phase of matter where atoms are ripped apart into electrons and nuclei? Plasmas are primed for strong electromagnetic interactions with the world around them, because of all the loose charged particles. They can be used to etch down into surfaces and catalyze chemical reactions, though the ions in a plasma won’t necessarily react with every form of matter they come across. And you can actually use an electromagnetic field on its own to contain a plasma, because of the plasma’s sensitivity to electromagnetic force. The most common design for a fusion reactor, the tokamak, uses a doughnut-shaped field to contain a plasma.

That’s how plasmas work at the macroscale, but it’s the individual charged particles in the plasma that react to electromagnetic force. Their interactions sum to a larger observable phenomenon, which emerges from nanoscale interactions. But interestingly, these collective interactions can be approximated as discrete entities, called quasiparticles. We’ve talked about quasiparticles before, when we talked about holes, which are quasiparticles defined by the absence of an electron. In plasmas, the collective oscillation of the charged particles can likewise be treated as a quasiparticle, called a plasmon. Each individual particle is responding to its local electromagnetic field, but the plasmon is what we see at a larger scale when everything appears to be responding in unison. A plasmon isn’t actually a particle, just a convenient way to think about collective phenomena.

Plasmons can be excited by an external electromagnetic stimulus, such as light. And actually, anyone who has looked up at a stained glass window has witnessed plasmonic absorption of light! Adding small amounts of an impurity like gold to glass results in a mixture of phases, with tiny gold nanoparticles effectively suspended in the amorphous silica that makes up glass. Gold, like many metals, has a high density of free electrons, and those electrons effectively form a plasma within each nanoparticle. When light shines through the colored glass, some wavelengths are plasmonically absorbed and others pass through. Adding a different metal to the glass can change the color, and so can different preparations of the glass that modify the size of the included nanoparticles. So all the colors in the window shown below are due to differing nanoparticles that plasmonically absorb light as it passes through!

Now you might ask, what determines which wavelengths of light pass through and which don’t? In the case of stained glass, it has to do with the size of the nanoparticles and the metal. But more generally, plasmas have a characteristic frequency at which they oscillate most easily, called the plasma frequency. The plasma frequency depends on several fundamental physical constants, including the mass and charge of an electron, but notably it also depends on the density of electrons in the plasma. For nanoparticles, the size of the particle also affects the response frequency. The practical upshot of the plasma frequency, though, is that if incident light has a frequency higher than the plasma frequency, the electrons in the plasma can’t respond fast enough to couple to the light, and it passes through the material. So the material properties that dictate the plasma frequency also determine whether light will be absorbed or transmitted.

For most metals that aren’t nanoscale, the plasma frequency is somewhere in the ultraviolet range of the electromagnetic spectrum. Thus, incident visible light is reflected by the free electron plasma in the metal, right at the surface of the material. And that’s why metals appear shiny!
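We can check that claim with a rough calculation, using the standard free-electron expression for the plasma frequency (the square root of n e² / ε₀ mₑ, where n is the electron density). Here’s a sketch using the free-electron density of copper as a representative metal; real metals deviate somewhat from this simple estimate because of their band structure:

```python
import math

# Physical constants (SI units)
e = 1.602e-19        # electron charge, C
m_e = 9.109e-31      # electron mass, kg
eps0 = 8.854e-12     # vacuum permittivity, F/m
c = 3.0e8            # speed of light, m/s

n = 8.5e28           # free-electron density of copper, per m^3

omega_p = math.sqrt(n * e**2 / (eps0 * m_e))   # plasma frequency, rad/s
wavelength = 2 * math.pi * c / omega_p         # corresponding wavelength

print(f"Plasma wavelength: {wavelength * 1e9:.0f} nm")
# -> about 115 nm, well into the ultraviolet. All visible light
#    (380-700 nm) lies below the plasma frequency and is reflected.
```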

Visible Light and the Electromagnetic Spectrum

We already know the basics of light: it’s electromagnetic energy, carried through space as a wave, in discrete packets called photons. But photons come in a variety of energies, and different energy photons can be used for different real-world applications. The energy of a photon determines, among other things, how quickly the electromagnetic wave oscillates. Higher energy photons oscillate more quickly than lower energy photons, so we say that high-energy photons have a higher frequency.

This frequency isn’t related to the speed that the photons travel, though. They can oscillate more or fewer times over a given distance, but still traverse that distance in the same amount of time. And as we know, the speed of light is given by Maxwell’s Equations for electromagnetism, and is constant regardless of reference frame! But another way to look at frequency is by considering the wavelength of light. Picture two photons which are traveling through space, at the same speed, but with one oscillating faster than the other. Thus one photon is high-frequency and one is low-frequency. While traversing the same distance, the high-frequency photon will oscillate more times than the low-frequency photon, so the distance covered by each cycle is smaller. We call this distance for a single cycle the wavelength, and it’s inversely proportional to the frequency. Long-wavelength photons are low-frequency, and short-wavelength photons are high-frequency. Overall, the range of photon frequencies is called the electromagnetic spectrum.
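Those relationships are easy to put numbers on: wavelength and frequency are tied together by the speed of light, and the photon energy is Planck’s constant times the frequency. A quick sketch for the two ends of the visible range:

```python
h = 6.626e-34    # Planck's constant, J*s
c = 3.0e8        # speed of light, m/s
eV = 1.602e-19   # joules per electron-volt

for name, wavelength in [("violet", 400e-9), ("red", 700e-9)]:
    f = c / wavelength          # frequency, from c = wavelength * frequency
    E = h * f / eV              # photon energy E = h * f, converted to eV
    print(f"{name}: f = {f:.2e} Hz, E = {E:.2f} eV")

# violet: f = 7.50e+14 Hz, E = 3.10 eV
# red:    f = 4.29e+14 Hz, E = 1.77 eV
```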

On Earth, photons come from an external source, often the sun, and are reflected off various objects in the world. The photons of a specific color may be absorbed, and thus not seen by an observer, which will make the absorbing object look like the other non-absorbed colors. If there are many absorbed photons or few photons to begin with, an object may just look dark. Our eyes contain molecules capable of detecting photons in the wavelength range 400-700 nanometers and passing that signal to our brains, so this is called the visible wavelength range of the electromagnetic spectrum. But it’s the interaction of photons with the world around us, and then with the sensing apparatus in our eyes, that determines what we see. Other creatures that have sensors for different frequencies of light, or that have more or fewer than three types of cones, may perceive the color and shape of things to be totally different. And the visible spectrum is only a small slice of the total range of photon frequencies, as you can see in the image below!

Photons that are slightly lower energy than the visible range are called infrared, and our skin and other warm objects emit photons in the infrared. Night-vision goggles often work using infrared photons, and some kinds of snakes can see infrared. Even lower energy photons have a lot of practical uses: microwave photons can be used to heat materials, and radio waves are photons of such low energy that they can travel long distances without being absorbed, making them ideal for long-range communication! For the same reason, long wavelength photons are really useful for astronomy, for example to observe distant planets and stars.

The sun emits photons in the visible range, but it also emits a lot of photons with a slightly higher energy, called ultraviolet or UV. Sunscreen blocks UV photons because they carry enough energy to damage biological tissue, and that damage is sunburn! At even higher frequencies, x-rays are a type of photon widely used in biomedical imaging, because they can penetrate tissue and show a basic map of a person’s bones and organs without surgery. And very high energy gamma rays are photons which result from nuclear processes in the nuclei of atoms, and which can pass through most material. I’ll talk a bit more about x-rays and gamma rays soon, as part of a larger discussion of radiation.

There is a lot more to light than visible light, and the various parts of the electromagnetic spectrum are used in many applications. Each wavelength gives us different information about the world, and we can use technology to extend the view that we’re biologically capable of to include x-rays, infrared, and many other parts of the electromagnetic spectrum!

Photoelectric Interactions

One of the strangest developments in modern physics came gradually, starting in the 19th century, as scientists learned more and more about the interactions between light and matter. In this post we’ll cover a few of the early experiments and their implications for both technology and our understanding of what light really is.

The first piece of the puzzle came when Becquerel found that shining a light on some materials caused a current to flow through the material. This is called the photovoltaic effect, because the photons incident on the material are causing a voltage difference which the current then follows. At the nanoscale, the photons are like packets of energy which can be absorbed by the many electrons in the material. In a semiconductor, some electrons can be moved this way from the valence band to the conduction band. Thus electrons that were previously immobile because they had no available energy states to jump to now have many states to choose from, and can use these to traverse the material! This effect is the basis of most solid state imaging devices, like those found in your digital camera (and trust me, we will delve further into that technology soon!).
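We can put a rough number on which photons are energetic enough to do this in silicon, whose band gap is about 1.1 eV. This is a back-of-the-envelope sketch; real absorption also depends on the details of the band structure:

```python
h = 6.626e-34     # Planck's constant, J*s
c = 3.0e8         # speed of light, m/s
eV = 1.602e-19    # joules per electron-volt

band_gap = 1.1 * eV                  # approximate band gap of silicon
cutoff = h * c / band_gap            # longest wavelength with enough energy

print(f"Cutoff wavelength: {cutoff * 1e9:.0f} nm")
# -> about 1128 nm: all visible light (380-700 nm) carries enough
#    energy to promote electrons into silicon's conduction band.
```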

But as it turns out, if you use photons that have a high enough energy, you can not only bump electrons out of their energy levels, you can cause them to leave the material entirely! This is called the photoelectric effect, and in some senses it seems like a natural extension of the photovoltaic effect: another consequence of light’s electromagnetic interaction with charged particles.

But actually, there is a very interesting scientific consequence of the experimental details of the photoelectric effect. Imagine shining a light on a material and observing the emitted electrons. You can change the light source in various ways, for example by changing the color or the brightness. Blue light has a shorter wavelength than red light, and thus more energy per photon. If you change only the color of the light, the number of ejected electrons stays the same, but each electron comes out with more or less kinetic energy; and below a certain threshold color, no electrons are ejected at all, no matter how bright the light. If instead you change the intensity of the light, brighter light ejects more electrons, but each one comes out with the same kinetic energy as before. This matters because at the time, light was thought of as a wave in space, an electromagnetic oscillation that could move through the world in certain ways. A wave carrying more energy was expected to give the electrons more energy, just as higher energy waves in the ocean cause more erosion of the rocks and sand on the shore. But the experiments disprove that idea: the energy of each ejected electron depends only on the color of the light, while the intensity controls only how many electrons come out. The simplest explanation is that light arrives in discrete packets, each carrying an energy set by its frequency, and each electron is ejected by absorbing a single packet. So the photoelectric effect shows that light is quantized: while it has wave characteristics, it also has particle characteristics and breaks down into discrete packets.
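That packet picture (Einstein’s explanation of the photoelectric effect) can be summarized in one relation: the maximum kinetic energy of an ejected electron is the photon energy minus the “work function,” the minimum energy needed to pull an electron out of that particular material. Here’s a sketch using a work function of 4.5 eV as a stand-in for a typical metal; real values vary from metal to metal:

```python
h = 6.626e-34     # Planck's constant, J*s
c = 3.0e8         # speed of light, m/s
eV = 1.602e-19    # joules per electron-volt

work_function = 4.5  # eV; representative assumed value, varies by metal

for name, wavelength in [("ultraviolet", 250e-9), ("green", 500e-9)]:
    photon_energy = h * c / wavelength / eV   # photon energy in eV
    ke_max = photon_energy - work_function    # Einstein's relation
    if ke_max > 0:
        print(f"{name}: electrons ejected with up to {ke_max:.2f} eV")
    else:
        print(f"{name}: no electrons ejected, at any intensity")

# ultraviolet: electrons ejected with up to 0.46 eV
# green: no electrons ejected, at any intensity
```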

The photoelectric effect is used in a few very sensitive photodetectors, but is not as common technologically as the photovoltaic effect. There are, however, a few weird consequences of the photoelectric effect, especially in space. Photons from the sun can excite electrons out of space stations and shuttles, and since the electrons then flow away and aren’t replenished, this can give the sun-facing side a positive charge. The photoelectric effect also causes dust on the Moon to charge, to the point that some dust is electrostatically repelled far from the surface. So even though the Moon has no significant gas atmosphere as we have on Earth, it has clouds of charged dust that fluctuate in electromagnetic patterns across its face.

Plasma Displays and Plasma Physics

You may have noticed one big technology missing from my recent post on how displays work: I didn’t talk about plasma displays! I wanted to have more space to discuss what plasma actually is before getting into the detail of how the displays work, and that’s what today’s post is about.

Plasmas are usually considered a state of matter. But whereas order and density distinguish the other states of matter from each other (solids are dense and ordered, liquids are dense and disordered, and gases are disperse and disordered), for plasma there is another factor that is important. Plasmas are disperse and disordered like gases, but they are also ionized. Whereas a non-ionized gas consists of neutral atoms, in an ionized gas the negatively charged electrons have been stripped from the positively charged atomic nuclei, and both are moving freely through the gas. The positively charged nuclei, along with any atoms that have lost only some of their electrons, are called ions. Remembering the attractive force that oppositely charged particles experience, it might seem like a plasma would be pretty short-lived! Electrons and nuclei form stable atoms together because that is a low-energy configuration, which means it’s very appealing for the plasma to recombine into regular atoms. And in fact that’s what happens if you let it cool down, but if you keep the plasma temperature high, the electrons and ions are more likely to stay separated. In fact, how many of the atoms are ionized depends strongly on the plasma temperature. Hotter plasmas often have nearly all of their atoms broken apart and ionized, whereas cooler plasmas may be only partly ionized. But the more free charges you have, the more electromagnetic interactions occur within the plasma, and this is what makes plasmas behave differently from non-ionized gases.
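The relationship between temperature and ionization can be made quantitative with the Saha equation. Here’s a minimal sketch assuming a pure hydrogen gas in thermal equilibrium (the 13.6 eV is hydrogen’s ionization energy, and the density is just a representative number), showing how sharply the ionized fraction rises with temperature:

```python
import math

# Constants (SI units)
m_e = 9.109e-31          # electron mass, kg
k_B = 1.381e-23          # Boltzmann constant, J/K
h = 6.626e-34            # Planck's constant, J*s
chi = 13.6 * 1.602e-19   # ionization energy of hydrogen, J

n = 1e20                 # total atom density, per m^3 (representative value)

for T in [5_000, 10_000, 20_000]:
    # Saha equation: n_e * n_ion / n_neutral = S(T)
    # (the statistical-weight prefactor happens to be 1 for hydrogen)
    S = (2 * math.pi * m_e * k_B * T / h**2) ** 1.5 * math.exp(-chi / (k_B * T))
    # With n_e = n_ion = x*n and n_neutral = (1 - x)*n: x^2 / (1 - x) = S / n
    r = S / n
    x = (-r + math.sqrt(r**2 + 4 * r)) / 2    # positive root of the quadratic
    print(f"T = {T:6d} K: ionized fraction = {x:.4f}")

# T =   5000 K: ionized fraction ~ 0.0004
# T =  10000 K: ionized fraction ~ 0.8
# T =  20000 K: ionized fraction ~ 1.0
```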

A hot gas of ions may sound somewhat removed from the quotidian phases of solid, liquid, and gas. But actually, plasma is the most common phase of luminous matter in the universe, prevalent both in stars and in the interstellar medium. (I say luminous matter here to distinguish from dark matter, which seems to make up more total mass than the matter we can see, and whose phase and nature are both unknown.) There are also lots of examples of plasmas here on Earth, such as lightning bolts, the northern lights, and the neon that lights up a neon sign. You may have noticed that these are all light-emitting phenomena; the high energy of the electrons and ions means that they have many lower energy states available to drop into, and those transitions often involve emitting a photon of visible light.

So how can plasma be controlled to make a display? Illumination comes from tiny pockets of gas that can be excited into a plasma by applying a voltage, and each pixel is defined by a separate plasma cell. For monochrome displays, the gas can be something like neon which emits light that we can see. But to create multiple colors of light, various phosphors are painted in front of the plasma cells. The phosphors absorb the (largely ultraviolet) light emitted by the plasma and emit their own light in a variety of colors (color CRT displays also use phosphors to produce their colors, though there the phosphors are excited by an electron beam rather than by light). Plasma displays tend to have better contrast than LCDs and less dependence on viewing angle, but they also consume quite a bit of energy, as you might expect from thinking about keeping the ions in the plasma separated.

There are a lot of other cool things about plasmas, like how they can be contained by electromagnetic fields and how they are used in modern industrial processing to etch semiconductors and grow nanomaterials. But for further reading I definitely recommend the Wikipedia article on plasmas.

The Many Roads from P-N Junctions to Transistors

When I called p-n junctions the building blocks of digital electronics, I was referring to their key role in building transistors. A transistor is another circuit element, but it is active, meaning it can add energy to a circuit, instead of passive like resistors, capacitors, or inductors, which only dissipate or temporarily store energy. The transistor has an input where current enters the device and an output where current leaves, but it also has a control electrode which can be used to modify the transistor’s function. A transistor can act as a switch or as an amplifier, changing the gain of a circuit (i.e. how much current comes out compared to how much went in). So where did the transistor come from, and how do you build one?

The earliest devices which acted as transistors were called ‘triodes’, for their three electrodes, and were made using vacuum tubes. A current could be transmitted from one electrode to another, across the airless vacuum inside the tube. But applying a voltage to the third electrode induces an electric field which diverts the current, meaning that the third electrode can be used as a switch to turn current on and off. Triodes were in wide use for the first half of the twentieth century, and enabled many radio and telephone innovations; in fact they are still used in some specialty applications that require very high voltages. But they are quite fragile and consume a lot of power, which is part of what pushed researchers to find alternate ways to build a transistor.

Recall that the p-n junction acts as a diode, passing current in one direction but not the other. Two p-n junctions back to back, which could be n-p-n or p-n-p, will pass no current in either direction, because one of the junctions will always block the flow of charge. However, applying a voltage to the point where the p-n junctions are connected modifies the electric field, allowing current to pass. This kind of device is called a bipolar junction transistor (or BJT), because the p-n junction diodes respond differently to positive voltage than to negative voltage, which means they are sensitive to the polarity of the current. (Remember all those times in Star Trek that they tried reversing the polarity? Maybe they had some diodes in backward!) The input of a bipolar junction transistor is called the collector, the output is called the emitter, and the region where voltage is applied to switch the device on is called the base. These are drawn as C, E, and B in the schematic shown below.

Bipolar Junction Transistor
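As a rough first-order sketch of the switching and amplifying behavior described above (the gain and the saturation limit here are illustrative numbers, not values for any real part):

```python
def bjt_collector_current(base_current, gain=100.0, supply_limit=0.1):
    """Idealized BJT model: the collector current is the base current
    multiplied by the current gain (beta), until the transistor
    saturates at whatever current the external circuit can supply."""
    return min(gain * base_current, supply_limit)

# A tiny base current controls a much larger collector current:
print(bjt_collector_current(0.0))       # 0.0  -> switch is off
print(bjt_collector_current(100e-6))    # 0.01 -> 100 uA of base current controls 10 mA
print(bjt_collector_current(10e-3))     # 0.1  -> saturated: acts like a closed switch
```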

Looking at the geometry of a bipolar junction transistor, you might notice that without the base region, the device is just a block of doped semiconductor which would be able to conduct current. What if there were a way to insert or remove a differently doped region to create junctions as needed? This can be done with a slightly different geometry, as shown below with the input now marked S for source, the output marked D for drain, and the control electrode marked G for gate. Applying a voltage to the gate electrode widens the depletion region at the p-n interface, which pinches off current by reducing the cross-section of p-type semiconductor available for conduction. This is effectively a p-n-p junction where the interfaces can be moved by adjusting the depletion region. Since it’s the electric field due to the gate that makes the channel wider or narrower, this device is called a junction field-effect transistor, or JFET.

Junction Field Effect Transistor

Both types of junction transistor were in widespread use in electronics from about 1945 to 1975. But another kind of transistor has since leapt to prominence. Inverting the logic that led us to the junction field effect transistor, we can imagine a device geometry where an electric field applied by a gate actually creates the conducting region in a semiconductor, as in the schematic below. This device is called a metal-oxide-semiconductor field-effect transistor (or MOSFET), because the metal gate electrode is separated from the semiconductor channel by a thin oxide layer. Using the oxide as an insulator is pretty clever, because interfaces between silicon and its native oxide have very few places for electrons to get stuck, compared to the interfaces between silicon and other insulating materials. This means that the whole device, with oxide, p-type silicon, and n-type silicon, can be made in a silicon fabrication facility, many of which had already been built in the first few decades of the electronics era.

These two advantages over junction transistors gave MOSFETs a definite edge, but one final development has cemented their dominance. The combination of an n-channel MOSFET and a p-channel MOSFET together enable the creation of an extremely useful set of basic circuits. Devices built using pairs of one n-channel and one p-channel MOSFET working together are called CMOS, as shorthand for complementary metal-oxide-semiconductor, and have both lower power consumption and increased noise tolerance when compared to junction transistors. You might be asking, what are these super important circuits that CMOS is the best way of building? They are the circuits for digital logic, which we will devote a post to shortly!
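To make the complementary idea concrete, here’s a little sketch that models each MOSFET as an ideal voltage-controlled switch and builds two basic CMOS gates from complementary pairs. Real gates are designed at the circuit level, but the pull-up/pull-down logic works exactly like this:

```python
def nmos_conducts(gate):
    return gate == 1   # an n-channel MOSFET conducts when its gate is high

def pmos_conducts(gate):
    return gate == 0   # a p-channel MOSFET conducts when its gate is low

def inverter(a):
    # The PMOS connects the output to the supply, the NMOS to ground.
    # Exactly one of them conducts for any input, so almost no current
    # flows except while switching: the source of CMOS's low power use.
    return 1 if pmos_conducts(a) else 0

def nand(a, b):
    # Two PMOS in parallel pull the output high;
    # two NMOS in series pull it low.
    if pmos_conducts(a) or pmos_conducts(b):
        return 1
    return 0   # here nmos_conducts(a) and nmos_conducts(b) are both true

for a in (0, 1):
    for b in (0, 1):
        print(f"NOT {a} = {inverter(a)}, {a} NAND {b} = {nand(a, b)}")
```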

Particles, Field Theory, and the Higgs Boson

The buzz around the discovery of the Higgs boson last week induced Erin to challenge me to explain what it is! Well, I’m not a particle physicist, but I do like talking about science, so here goes.

It’s my opinion that the easiest way to understand the Higgs boson is by starting from forces and fields. In day to day life, there are two broad sorts of forces that we are used to encountering. There are forces that come from expending energy to create mechanical action, like pushing a door or throwing a ball. But there are also forces that arise due to intrinsic fields, such as gravitational force on an apple falling or magnetic force on a compass. These fields provide a way to quantify the fact that at every point in space, there are gravitational and electromagnetic and other forces coming from other near or not so near objects. If a new object is introduced to a point in space, it feels forces due to those fields. One way to think about it is that the fields transmit forces between objects, like the gravitational field which transmits forces between the Earth and the Sun. Thus, there is no true vacuum, in part because there is a measurable gravitational field. Quantum field theory takes things a step further and describes everything in terms of fields. Then, what we have been calling ‘particles’ are special mathematical solutions to the field equations, like oscillations of the underlying field.

But when thinking about fields providing force, a very good question to ask is: what’s the physical mechanism for that? When I push on a door, I generate movement by activating muscles, which turns one chemical into another, turning energy that was stored in chemical bonds into a mechanical form of energy. My hand transfers that energy to the door, via the interface between door and hand at the spot where I’m pushing, and then the door moves. So if there is really a magnetic field pushing a compass needle, how is energy transferred to the needle in order for it to move?

The current thinking in particle physics is that each of these fields has a corresponding particle that transfers the forces from that field. So for an electromagnetic interaction between two particles, there is actually a third particle type that is being exchanged, which is what conveys energy from the field. Usually these mediating particles are ‘virtual’, meaning they exist over very short distances but can have high energies (the requirement of short lifetime for high energy comes from the uncertainty principle between energy and time). In the Standard Model of particle physics, these mediators are called gauge bosons. For example, the electromagnetic field is mediated by the photon, which you may know as the quantum of light. There are special forces that are only noticeable on very short length scales, such as those in the nucleus of atoms. These are the nuclear strong and weak forces, mediated by gluons and W and Z bosons. An additional gauge boson for the force of gravity, called the graviton, has been theorized but not yet detected.
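The energy-time uncertainty principle even suggests a back-of-the-envelope estimate of how far a virtual mediator can reach: a mediator of mass m can be “borrowed” for a time of roughly ħ/(mc²), so it can cover a distance of about ħ/(mc). Here’s that estimate for the W boson, whose large mass is why the weak force only operates at nuclear scales:

```python
hbar_c = 197.33          # hbar * c, in MeV * femtometers
w_mass = 80.4e3          # W boson rest energy (m * c^2), in MeV

range_fm = hbar_c / w_mass   # range ~ hbar/(m*c) = (hbar*c)/(m*c^2)
print(f"Weak force range: about {range_fm:.4f} fm")
# -> about 0.0025 fm, well under one percent of a proton's radius.
# A massless mediator like the photon gives an infinite range,
# which is why electromagnetism is a long-range force.
```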

The Standard Model of particle interactions was intended as a framework for unifying the electromagnetic, strong, and weak forces, meaning that it had to account for the properties of gauge bosons. Accounting for the mass of these gauge bosons has been complex: the photon is massless, but the W and Z bosons have a significant mass. But in early formulations of the Standard Model, all particles were treated as massless, and it was a big issue to find a way to fit non-zero particle mass into the picture. The Higgs field is a means to that end, dreamt up in 1964 by the theoretical physicist Peter Higgs, with similar mechanisms proposed independently by other theorists that same year. It is another field which interacts with particles and contributes to the forces that they experience, this time by giving them mass. The Higgs field is often described as a field which slows some particles down as if they were moving through treacle, but Higgs himself has mentioned his dislike of the idea that the particles experience drag or turbulence due to the Higgs field. The analogy with drag and turbulence implies that energy is being dissipated, but the Higgs field affects particles without reducing their energy. Higgs proposes thinking of it as similar to the refraction of light as it enters a medium like water. As the properties of light are changed by moving through water, so the properties of the particle are being changed by interaction with the Higgs field. How particles interact with the Higgs field determines their mass.

If the Higgs field is real, a corresponding particle, the Higgs boson, must exist. And a new boson with many of the expected properties of the Higgs boson has now been found in two corroborating experiments at CERN, a particle physics laboratory in Geneva.

There are many different theoretical ways to add the Higgs field into the Standard Model. But most of them predict a fairly high energy for the Higgs boson, and so as accelerator energies rose progressively over the decades and the Higgs boson was not found, Higgs bosons of low mass were eliminated as possibilities. The Large Hadron Collider, which went online in 2008, was expected to have an energy range capable of either finding a Higgs boson that fit with one version of the Standard Model, or proving that the Standard Model contained a serious error. The newly discovered Higgs boson seems to fit the Standard Model, though more work is needed to figure out which version of the Standard Model was correct.

There are still lots of questions to be answered regarding the Standard Model, though. The Standard Model does not incorporate gravity. It doesn’t explain why the range of mass values for the fundamental particles is huge and seemingly arbitrary, as opposed to the few fixed values of charge that fundamental particles can have. It also doesn’t explain why there is more matter than antimatter (a puzzle closely related to CP violation), or what dark matter or dark energy (both of which have been inferred from astronomical observations) might be. So the discovery of the Higgs boson is definitely a triumph for the particle physics community, but there are still plenty of discoveries to be made!