Monthly Archives: September 2012

That’s Edutainment? – the fine line between catching their attention and losing the plot

Edutainment can be a dirty word, depending on who you ask. While the blending of entertainment and education seems to be a positive step – why wouldn’t you want people to enjoy learning? – it can be viewed with distaste by those who consider themselves ‘real’ educators. Very few of them would claim that their own flavours of educational work are without enjoyment, so why this disconnect?

Edutainment centres around the idea that all learning should be ‘fun’, and it often incorporates technology such as video games, films and radio, frequently with an interactive focus. It’s informal and revolves less around a central teacher figure and more around the student’s engagement with a narrative designed to impart information while also engaging them emotionally. And while edutainment principles can be used to deliver any curriculum, the approach often seems to be focused on making science fun and enjoyable for otherwise unengaged students.

The benefits of this type of engagement are myriad: it addresses the challenge of catching and keeping people’s attention, it can make traditionally ‘boring’ or ‘difficult’ subjects more engaging, and it does so in a way that requires little preparation or resources from those delivering it (after initial development, of course). So what objections could possibly be lodged against it?

One of the main criticisms seems to boil down to the fact that, in ‘making learning fun’, traditional learning is relegated to the ‘unfun’ sector; it becomes an obstacle to cover up or transmute into something palatable and shiny. While scientists and engineers hopefully enjoy their jobs and are passionate about their subjects, it’s doubtful that any given individual would agree that their progression and learning had been all fun, all the time. Should we be encouraging young people to engage only with that which tells them a nice story or catches their eye with well-designed graphics, or should we level with them that STEM isn’t always fun? Facts need to be learned, abstract theories must be understood, and above all critical thinking, hard work and dedication are key to becoming successful learners and contributors to the wider sphere of understanding.

That being said, as educators we should be willing to see our methods evolve and change with time. Edutainment activities can have value as long as they are properly vetted and evaluated for impact (on both attitudes and uptake of content). Some developing technologies are especially exciting – for example, http://www.teachwithportals.com/ is a free-for-schools initiative launched by the distribution platform Steam, allowing teachers to use the Puzzle Maker and other templates to explore physics, maths, technology and even literacy topics while students play their way through derivatives of a wildly popular video game. This will not be a solution for all – there is the problem of timetable integration, of whether school computers are capable of running the appropriate software, and of training teachers to make use of yet another new tool – but it might work for some. And if it doesn’t, well, there’s always the classics:

[image]

Electronic Display Basics

As the capabilities of computers grew, it became more and more important to have an easy means of interacting with them: providing inputs and seeing outputs. Paper was used to this end for many years, but another technology already existed in television and was co-opted for computer displays: the cathode ray tube.

Cathode ray tubes (usually abbreviated CRTs) are pretty similar in construction to the vacuum tube, the early logic switch component. Inside a glass tube which has had the air removed, we have a filament that can emit electrons when heated. But instead of the electron collector in the vacuum tube, there is a screen which immediately emits light when struck by electrons, a behavior called fluorescence. Now, in order to create a two-dimensional image, you need a very focused electron beam, as well as some way to move that beam across the screen. You can use an electric field for both of these tasks (television CRTs typically used magnetic coils for the deflection), creating a focused beam that can be scanned around the screen in order to light up different sections. But the larger the screen, the deeper the display has to be to give the beam enough room to scan the screen with reasonable resolution. This accounts for the large size of CRT monitors, which you either remember or occasionally spot in films featuring computers from roughly 1980 to 2000.
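To make the scanning idea concrete, here is a minimal Python sketch of the order in which a single beam visits the screen: sweep across each line, snap back, drop down a line, and repeat. The resolution numbers are made up purely for illustration.

    WIDTH, HEIGHT = 640, 480          # beam positions per line, lines per frame

    def raster_scan():
        """Yield (x, y) beam positions in the order a scanning beam would visit them."""
        for y in range(HEIGHT):       # vertical sweep, one line at a time
            for x in range(WIDTH):    # horizontal sweep across the line
                yield x, y
            # horizontal retrace: the beam snaps back to the left edge here
        # vertical retrace: the beam snaps back to the top for the next frame

    positions = raster_scan()
    print(next(positions), next(positions))   # (0, 0), then (1, 0)

With the beam intensity modulated at each position, one pass through this sequence lights up every point of the screen in turn, which is exactly why the tube needs depth for the beam to reach the edges.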

The main display technology that displaced the CRT is based on a pretty amazing material called liquid crystals (and you may recognize the acronym LCD, which stands for liquid crystal display). The name comes from the fact that if you create a collection of small crystals that have one axis significantly longer than the others, this assembly of crystals acts like a hybrid of two phases of matter: liquids and elongated solid crystals. The crystals themselves can move around and are not fixed in one place, but the orientation of one crystal can affect those around it, and it’s possible to have liquid crystal assemblies with various degrees of ordering. The key to using liquid crystals in a display is controlling their orientation, because their orientation determines which polarization of light can pass through them. Sandwich a liquid crystal cell between polarizing filters and apply an external electric field: in one orientation the crystals steer the light’s polarization so that it passes through the second filter, making the cell transparent, while in the other orientation the light arrives at that filter with the wrong polarization and is blocked, so none is transmitted. If you put an array of these liquid crystal cells in front of a backlight (these days usually made of light-emitting diodes), then you’ve created a basic liquid crystal display.
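As a rough illustration of the polarization argument, here is a small sketch of one cell between crossed polarizing filters. It assumes a twisted-nematic style cell whose remaining twist shrinks in proportion to the applied field (an assumption made purely for illustration), and it uses Malus’s law, cos² of the angle mismatch, for ideal polarizers.

    import math

    def transmission(field_fraction):
        """Fraction of the backlight passed by one cell; field_fraction runs from 0 to 1."""
        twist = math.radians(90) * (1 - field_fraction)   # twist remaining in the crystal layer
        mismatch = math.radians(90) - twist               # angle between the light and the second filter
        return math.cos(mismatch) ** 2                    # Malus's law for an ideal polarizer

    for field in (0.0, 0.5, 1.0):
        print(f"field={field:.1f}  transmission={transmission(field):.2f}")
    # field=0.0 -> 1.00 (cell transparent), field=1.0 -> 0.00 (cell dark)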

Now, one thing to note about both of the above technologies is that they are emissive: they generate light, so they can be read in a darkened room, used as a flashlight or as a source of eyestrain. But this also means that whenever an emissive display is switched on it’s consuming power, and when it’s switched off there is nothing displayed. Contrast this with that ancient display technology, paper, which is reflective: it can only be read with enough ambient light bouncing off its surface, and once created it costs nothing to display its content continuously. Of course, you can’t rewrite what’s on paper very easily, so you end up using a lot of it. Additionally, the same natural resources we use for paper are in high demand for many other things, and transporting a library as a huge collection of papers is quite onerous; for these and many other reasons, the move to electronic displays is pretty strongly motivated. But is there a way to make a reflective electronic display?

If you have ever used a Kindle, or any other device with an e-ink display, then you know the answer is yes. The surface of an e-ink display has millions of tiny cells, which contain minuscule beads that are either white and positively charged, or black and negatively charged. An electric field can be applied to each cell, bringing either the white or the black beads to its surface. Light coming in from outside sources is then either reflected off the white beads or absorbed by the black beads, creating a rewritable reflective display.
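Here is a toy model of that mechanism, following the description above; the voltage values and the one-row “image” are just illustrative.

    def surface_color(applied_voltage):
        """Colour shown by one cell, given the voltage applied at the viewing surface."""
        if applied_voltage < 0:
            return "white"   # a negative surface electrode attracts the positive white beads
        return "black"       # a positive surface electrode attracts the negative black beads

    # "Write" a tiny one-row image, then read it back; once the beads have moved,
    # no further power is needed to keep showing it.
    row_voltages = [-5, +5, +5, -5]
    print([surface_color(v) for v in row_voltages])   # ['white', 'black', 'black', 'white']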

Of course, paper is still the cheapest display technology by far.  But in some cases, for example when you want to make small corrections in content, display something outside, ship information somewhere, or carry an entire library on something the size of a pad of paper, newer inventions are getting the edge on paper.

What is a memristor?

One of the most hyped devices of the past few years has been the memristor, and in comparison to other circuit elements I think you’ll agree it’s pretty weird, and interesting!

The existence of the memristor was first predicted in 1971 by Leon Chua. The behavior of basic circuit elements had long been described mathematically, with each circuit element having a given relationship between two of four quantities: charge (q), current (I), voltage (V), and flux (φ). Chua noticed that these equations could be tabulated and had a symmetry of sorts, but that one equation seemed to be missing: the one for a device directly relating charge (the integral of current) to magnetic flux. The behavior of such a device would change depending on how much current had passed through it, and for this reason Chua called it a ‘memristor’, a contraction of ‘memory’ and ‘resistor’. You can picture the four quantities as the corners of a tetrahedron, with the mathematical relationships below as its edges. Resistor behavior is quantified by the resistance R and capacitor behavior by the capacitance C, and alongside them we have relations for the inductance L and the memristance M. It’s not crucial to understand these equations intimately, just to see that they have a certain symmetry and completeness to them as a set of relations between the four key variables. Five of these relationships had been experimentally observed in devices, and the symmetry mathematically suggested the sixth, the memristance equation. But having the equation doesn’t tell you how to make a device that will exhibit that behavior!
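Written out in the differential form usually quoted in the memristor literature, the six relations are:

    \[
    \begin{aligned}
    dq &= I\,dt, & d\varphi &= V\,dt,\\
    V &= R\,I \ \text{(resistor)}, & q &= C\,V \ \text{(capacitor)},\\
    d\varphi &= L\,dI \ \text{(inductor)}, & d\varphi &= M\,dq \ \text{(memristor)}.
    \end{aligned}
    \]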

Chua’s initial proposals for physically creating a memristor used external power to store the remembered information, making the component active rather than passive. However, in 2008 a real-world memristor was created using a nanoscale film with embedded mobile charge that could be pushed around by applying an electric field. How much of the film contains that extra charge determines how much of the device has high resistance and how much has low resistance, so the total resistance depends on how much current has passed through it.
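A small numerical sketch of that behavior, using the simple ‘linear dopant drift’ picture commonly quoted for the 2008 device; all parameter values here are illustrative, not measurements of any real memristor.

    import math

    R_ON, R_OFF = 100.0, 16e3   # resistance when fully doped / fully undoped (ohms)
    D = 10e-9                   # film thickness (m)
    MU = 1e-14                  # dopant mobility (m^2 s^-1 V^-1)

    w = D / 2                   # width of the doped (low-resistance) region
    I0, FREQ = 1e-4, 1.0        # drive current amplitude (A) and frequency (Hz)
    dt = 1e-3                   # time step for Euler integration (s)

    for step in range(int(1.0 / dt) + 1):
        t = step * dt
        i = I0 * math.sin(2 * math.pi * FREQ * t)      # applied current
        M = R_ON * (w / D) + R_OFF * (1 - w / D)       # memristance at this instant
        w += MU * (R_ON / D) * i * dt                  # doped region grows or shrinks with current
        w = min(max(w, 0.0), D)                        # the boundary cannot leave the film
        if step % 100 == 0:
            print(f"t={t:.2f} s  i={i*1e6:7.1f} uA  M={M/1e3:6.2f} kOhm")

    # The resistance falls while current flows one way and rises while it flows the
    # other way, so its value at any moment depends on the charge that has already
    # passed: that history-dependence is the 'memory' in memristor.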

This isn’t the only implementation of a memristor available, because as many researchers realized once the first results were announced, memory of past current flow is a common nanoscale feature. Current flow can often cause small changes in materials, and while these changes may not be noticeable in the properties of a bulk material, when the material has very small thickness or feature size the changes can affect its properties in a measurable way. Since this constitutes a form of memory that lasts for a significant amount of time, and there is a large market for non-volatile electronic memory for computers, the commercial interest in these devices has been considerable. HP expects to have its version of memristor-based computer memory on the market by 2014, and it remains to be seen what other novel electronics may come from the memristor.

Boolean Logic

We talked last time about the practical conveniences of using digital electronic signals for calculations rather than analog signals. But I want to get a little more into what the digital system means for how calculations are performed, which means talking about what Boolean algebra and logic are.

But first let’s talk about algebra. Fundamentally, algebra is taking a set of elements and applying operations to those elements. A straightforward example would be the arithmetic operations of addition, subtraction, multiplication, and division on all integers. If you take algebra in school, you study many more complex functions that can operate on various numbers, and you get into the symbolic representation of numbers (i.e. solving for x, where x is a number whose value you want to know).  So algebra is essentially an abstract framework for mathematics, one that allows you to see and describe rules and patterns more easily.

But it’s also possible to define specific sets of numbers or elements to work with, or to define a limited number of operations, and so construct specialized algebras. That’s the basis of abstract algebra, a really cool math topic that I strongly suggest learning more about! Boolean algebra is essentially a specialized algebra of this kind, one that developed in parallel with abstract algebra as a formal field. It was developed by George Boole, a mathematician and logician who wanted to establish an algebraic way of performing logical operations. Boole examined the operations that were possible for a set of only two values, the so-called ‘truth values’ of 0 and 1 (or false and true). His 1854 book, An Investigation of the Laws of Thought, examined the behavior of these operators, and we’ll look at truth tables for a couple of them to get a feel for how they work.

x y xANDy
0 0 0
0 1 0
1 0 0
1 1 1

Above is the truth table for an operation called AND. Given inputs x and y, which can have the values 0 or 1, the output xANDy is 1 only if both x and y are 1. So the left two columns list the possible input values for x and y, and the rightmost column shows the output xANDy for each of those combinations. For example, the last line is a way of displaying the equation (1)AND(1) = 1.

x y xORy
0 0 0
0 1 1
1 0 1
1 1 1

This truth table is for OR, an operation that returns 1 if either or both inputs have the value 1.
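If you prefer code to tables, here is a short Python sketch (the AND and OR helper names are just mine) that reproduces both truth tables above:

    from itertools import product

    def AND(x, y):
        return x & y    # 1 only when both inputs are 1

    def OR(x, y):
        return x | y    # 1 when either input (or both) is 1

    for name, op in [("AND", AND), ("OR", OR)]:
        print(f"x y x{name}y")
        for x, y in product((0, 1), repeat=2):
            print(x, y, op(x, y))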

The real technological relevance of Boolean algebra came when Claude Shannon realized in 1937 that it could be used to analyze and predict the operation of electrical switches. From this you can develop a combinational logic for analyzing very complex circuits whose outputs depend solely on their inputs, and that is the main theoretical tool behind digital electronic circuits today!
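Shannon’s observation is easy to sketch: treat a closed switch as 1 and an open switch as 0, and then switches wired in series behave like AND, while switches wired in parallel behave like OR. This is a minimal illustration, not his notation.

    def series(*switches):
        """Current flows through a series chain only if every switch is closed."""
        return int(all(switches))

    def parallel(*switches):
        """Current flows through parallel branches if any switch is closed."""
        return int(any(switches))

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "series:", series(a, b), "parallel:", parallel(a, b))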

What are ‘digital’ electronics?

Why is the present moment called “the digital age”? What does it mean to have digital electronics, as opposed to analog? What’s so useful about digital?

In present-day electronics, the bulk of the calculations and computing are done on digital circuits, hence the “digital age” moniker. To get into what this means, we have to take a look back at the early development of calculating machines that used electronic signals. There are lots of components you can use in an electronic circuit, and with some basic resistors and capacitors you can start to build circuits that add one voltage to another, or perform other simple mathematical calculations. Early electronic calculators were made to mimic physical calculators and controllers that operated using gears or other mechanisms, such as the Antikythera mechanism, astrolabes, or even slide rules. Most physical calculators are analog, meaning that they apply an operation, like addition, to a set of numbers that can lie anywhere along a continuous spectrum. Adding 3 to 6 and getting 9 is an analog operation.

But it turns out that analog electronics have a reliability problem: any variation in the output, which could be due to changes in the circuit’s environment, degradation in the circuit components, or leaking of current out of the circuit, will be indistinguishable from the actual signal. So if I add 3 volts to 6 volts but lose half a volt, I’ll get 8.5 volts and have no way of knowing whether that’s the answer I was supposed to get or not. For some applications this isn’t an issue, as long as the person using the electronics is able to calibrate the circuit before operation, or there is some method of double-checking the result. But if you want to build consumer electronics, where the user is not an expert, or electronics that must operate without adjustment somewhere inaccessible, reliability is a huge issue.

But what if, instead of having a continuum of possible values, we use only a small number of discrete values? This is called digital computation, after the digits of the hand, which are frequently used to count the integers from 1 to 10. Digital computing deals in only two states: on and off, also known as true and false, or 0 and 1. It allows us to hide all the messiness of the analog world by assigning on and off to wide ranges of voltage values: for example, we could have 0-2 V mean off and 3-5 V mean on. Now we have a very large margin of error in our voltage ranges, so a little leakage or signal degradation will not affect what we read as the circuit output. The graph below shows how an oscillating analog signal would look as a digital signal.
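As a rough sketch of that conversion in code, using the same illustrative 0-2 V and 3-5 V ranges: readings that fall in the gap come out as undefined, which is exactly the region real logic designs keep signals away from.

    import math

    def to_digital(volts):
        """Read an analog voltage as a digital level: 0-2 V is off, 3-5 V is on."""
        if volts <= 2.0:
            return 0
        if volts >= 3.0:
            return 1
        return None   # forbidden region between the two thresholds

    # Sample an oscillating analog signal (a 0-5 V sine with a little added noise)
    # and show the digital value read at each sample.
    for step in range(8):
        t = step / 8
        noise = 0.2 if step % 2 == 0 else -0.2
        volts = 2.5 + 2.5 * math.sin(2 * math.pi * t) + noise
        print(f"t={t:.2f}  analog={volts:5.2f} V  digital={to_digital(volts)}")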

Physically, there are several ways to build a system that can switch between on and off. The first one that gained a foothold in technological use was the vacuum tube, which is somewhat similar to an incandescent light bulb. In a vacuum tube, a filament can be heated to emit electrons, which are collected at an electrode nearby. The electrons pass through a region of empty vacuum to get from the filament to the electrode, hence the name, and a control grid induces an electric field that can change how much current passes through the tube. Early computers had to have one of these vacuum tubes for each switch, hence the massive size of the first computers, which easily filled a room or several. If you read science fiction from the vacuum tube era, computers play a big role, quite literally, since people assumed that any computer with the power to rival the human brain would have to be large enough to fill huge underground caverns.

The development of silicon as a material for electronics changed everything. Silicon can be turned on or off as a conductor of electrons simply by applying a voltage, and it can be manufactured so that the switch size is much smaller than the smallest vacuum tube. The scale of the smallest features you can make from silicon has been decreasing for decades, which means we are building computational chips with more and more switches in a given area.

But one tricky thing about moving to digital logic is that the best math for these calculations is not much like our everyday math. Fortunately, a construction for doing digital calculations was developed in the mid-nineteenth century by George Boole. More on that next time!