How to build an electronic neuron

by John Iovine

ARTIFICIAL NEURAL NETWORKS ARE electronic systems that function and learn according to biological models of the human brain. Typically such networks are implemented in computers as programs, coprocessors or operating systems. By mimicking the vast interconnections of neurons, researchers hope to mirror the way the brain learns, stores knowledge and responds to various injuries. The networks might someday even be a basis for future intelligent thinking machines [see "Will Robots Inherit the Earth?" by Marvin Minsky, page 108]. They may also help to surmount the barriers faced by standard programming, which fails to perform in real time some tasks the human mind considers simple, such as recognizing speech and identifying images.

Figure 1: HARD-WIRED NEURAL NETWORK tracks the sun by keeping two photosensors equally lit. A motor that runs too quickly may need to be coupled to a larger gear (inset).

In an artificial neural network, objects called units represent the bodies of neurons. The units are connected by links, which replace the dendrites and axons. The links adjust the output strength of the units, mimicking the different strengths of synaptic connections, and transmit the signal to other units. Each unit, like a real neuron, fires only if the combined input signal routed to it exceeds some threshold value.
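To make the idea concrete, here is a minimal sketch in Python of such a unit (the circuit described later in this article uses no software; the inputs, weights and threshold below are hypothetical values chosen only to illustrate the weighted-sum-and-threshold behavior):

```python
# A minimal sketch of a single artificial unit: it sums its weighted
# inputs and fires only if the total exceeds a threshold. All values
# here are illustrative, not taken from the article's circuit.
def unit_fires(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Two inputs, such as signals from two light sensors:
print(unit_fires([0.9, 0.2], [1.0, 1.0], threshold=1.0))  # prints 1
```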

The primary advantage of such an architecture is that the network can learn. Specifically, it can adjust the strength, or weight, of the links between units. In so doing, the links modify the output from one unit before feeding the signal to the next unit. Some links get stronger; others become weaker. To teach a network, researchers present so-called training patterns to the program, which adjusts the weights of its links in response. In effect, the training alters the firing pattern of the network [see "How Neural Networks Learn from Experience," by Geoffrey E. Hinton; SCIENTIFIC AMERICAN, September 1992].
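As a rough illustration of such training, the fragment below follows the classic perceptron rule: after each pattern, every weight is nudged in proportion to the error between the desired and actual output. The rule and the learning rate are standard textbook choices, not details of any particular network described here.

```python
# Illustrative weight update (the classic perceptron learning rule):
# each link's weight moves in proportion to the output error.
def update_weights(weights, inputs, desired, actual, rate=0.1):
    error = desired - actual  # +1, 0 or -1 for binary units
    return [w + rate * error * x for w, x in zip(weights, inputs)]
```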

What I describe here is the construction of a simple, hard-wired neural network. Using a motor, this circuit follows the motion of a light source (such as the sun). All the parts are readily available from electronic hobby shops such as Radio Shack.

The operation of the circuit is simple, particularly because it relies on only one neuron. The neuron is a type 741 operational amplifier (op-amp), a common integrated circuit. Be sure the op-amp comes with a pin diagram, which identifies the connection points on the op-amp by number.

Two cadmium sulfide photocells act as neural sensors, providing input to the op-amp. The resistance of these components, which are about the size of the tip of your little finger, decreases as the intensity of the light falling on them increases. With epoxy or rubber cement, glue the photocells a couple of centimeters apart on a thin, plastic board that is approximately three centimeters wide by five centimeters long. Then affix a similarly sized piece of plastic between the cells so that the assembly assumes an inverted T shape. This piece must be opaque; I painted mine black.

The rest of the circuit should be built on a stationary surface a few centimeters from the photosensor assembly. A breadboard (a perforated sheet of plastic that holds electronic components) will help keep the connections tidy.

You will also need a power supply: a couple of nine-volt batteries will do the job. Connect the batteries together by wiring the positive terminal of one battery to the negative terminal of the other (the junction serves as the circuit's ground). This configuration leaves open one terminal on each battery, thereby creating a bipolar power supply. Four components need to draw electricity: the two photocells, the op-amp and the motor. Connect these parts in parallel to the batteries. For convenience, you may wish to wire in an on-off switch.

On the schematic [see Figure 2], you will notice several resistors. They stabilize the amount of current that flows through the circuit. A 10-kilohm potentiometer (basically, a variable resistor) is connected to one of the photocells. This component regulates the voltage received by the op-amp; that is, it adjusts the weight of the link.

Figure 2: CIRCUIT SCHEMATIC of the neural network shows the necessary connections. The type 741 operational amplifier acts as the neuron.
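The arithmetic behind the potentiometer's "weight" adjustment is that of a voltage divider. The sketch below assumes, for illustration, that the photocell and the potentiometer form a simple divider across a nine-volt supply; the resistance values are hypothetical.

```python
# Voltage-divider arithmetic behind the adjustable "weight"
# (assumed topology: photocell in series with the potentiometer).
def divider_voltage(v_supply, r_photocell, r_pot):
    # Voltage at the junction between the photocell and the pot.
    return v_supply * r_pot / (r_photocell + r_pot)

# Bright light lowers the photocell's resistance, raising the output:
print(divider_voltage(9.0, r_photocell=5_000, r_pot=10_000))   # 6.0 V
print(divider_voltage(9.0, r_photocell=50_000, r_pot=10_000))  # 1.5 V
```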

Hook up the photosensors to pins 2 and 3 of the op-amp. The power supply goes to pins 4 and 7. The output signal leaves the op-amp at pin 6 and travels to two transistors. One, labeled Q1 on the schematic, is a so-called NPN type; the other, Q2, is a PNP type. These transistors activate the motor and, in some sense, can be looked on as artificial motor neurons.
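In software terms, the op-amp and transistor pair behave roughly like the sketch below: the output swings with the difference between pins 3 and 2, and its sign selects which transistor conducts. The gain, supply rails and 0.6-volt turn-on figure are generic assumptions, not measured values from this circuit.

```python
# Rough model of the op-amp/transistor stage: the sign of the
# output selects a transistor, and hence the motor's direction.
def motor_command(v_pin3, v_pin2, gain=100_000.0, rail=9.0):
    v_out = max(-rail, min(rail, gain * (v_pin3 - v_pin2)))
    if v_out > 0.6:        # enough to switch on the NPN transistor (Q1)
        return "Q1 conducts: motor turns one way"
    if v_out < -0.6:       # enough to switch on the PNP transistor (Q2)
        return "Q2 conducts: motor turns the other way"
    return "motor off: inputs balanced"

print(motor_command(4.6, 4.5))  # Q1 conducts: motor turns one way
```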

The motor is a low-voltage, direct-current type. The one I used was a 12-volt, one-revolution-per-minute (RPM) model. If your motor's RPM is too high, you will need to couple a large gear to it to reduce the speed [see Figure 1]. The motor should have a shaft about six centimeters long. To extend mine, I slipped a stiff plastic tube over the end of the motor shaft.
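Choosing the gear is simple ratio arithmetic: the output speed scales by the ratio of the motor gear's teeth to the driven gear's teeth. The tooth counts below are made up for illustration.

```python
# Gear-reduction arithmetic (tooth counts are hypothetical).
def reduced_rpm(motor_rpm, motor_gear_teeth, driven_gear_teeth):
    return motor_rpm * motor_gear_teeth / driven_gear_teeth

print(reduced_rpm(60, 10, 120))  # a 60-RPM motor slowed to 5 RPM
```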

To train the circuit, expose both photocells to equal levels of light. A lamp placed directly above the sensors should suffice. Adjust the potentiometer until the motor stops. This process alters the weight of the signal, so that when both photosensors receive equal illumination, the op-amp generates no output voltage. Under uneven lighting, the op-amp's output takes on either a positive voltage (activating the NPN transistor) or a negative voltage (triggering the PNP transistor). Which transistor is activated depends on which sensor receives less light.

To test the circuit, cover one photocell; the motor should begin rotating. It should stop once you remove the cover. Then block the other photocell. The motor should begin rotating in the opposite direction.

Now glue the photosensor assembly to the shaft of the motor so that the photocells face up. Illuminate the sensors from an angle. If the motor rotates in the wrong direction (that is, away from the light), reverse the power wires to the motor. You may have to cut down on the amount of light reaching the photocells; full sun will easily saturate the sensors. Just cover the photocells with a colored, translucent piece of plastic.

As long as the sun is directly aligned with the two photocells, exposing them to equal amounts of light, the inputs to the neuron balance out. As the sun moves across the sky, the alignment is thrown off, making one input stronger than the other. The op-amp neuron activates the motor, realigning the photocells. Notice that this neural circuit tracks a light source without relying on any equations or programming code.
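The behavior is that of a simple feedback loop, which the toy simulation below captures; all quantities are in arbitrary units, and the deadband and step sizes are invented for illustration.

```python
# Toy feedback loop: the sun drifts, the imbalance between the two
# photocells drives the motor, and the motor turns the assembly back
# toward balance. Angles and thresholds are in arbitrary units.
sun_angle, sensor_angle = 30.0, 0.0
for _ in range(200):
    sun_angle += 0.1                      # the sun drifts across the sky
    error = sun_angle - sensor_angle      # uneven lighting of the cells
    if abs(error) > 0.5:                  # op-amp switches on a transistor
        sensor_angle += 1.0 if error > 0 else -1.0  # motor realigns
print(abs(sun_angle - sensor_angle) < 2.0)  # True: assembly stays aligned
```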

The circuit has immediate practical applications in the field of solar energy. For example, it can be hooked up to solar cells, solar furnaces or solar water heaters to keep them aimed for the maximum amount of light input.

You can also modify the device in a number of ways. For instance, you can add a second network to track a light source that moves vertically as well as horizontally. Ambitious amateurs might try replacing the photocells with other types of sensors, such as radio antennae, to track radio-emitting satellites across the sky. Photocells sensitive to infrared energy could be used to track heat sources, the basis for some types of military targeting. Plenty of other modifications are possible, but don't expect your neuron ever to achieve consciousness.

More intricate versions of the circuit described here demand fairly complicated hard-wiring. Complex variations are therefore perhaps best constructed as software. I wrote a program in BASIC that emulates an early neural network, the Perceptron, created in 1957 by Frank Rosenblatt of Cornell University. The Perceptron learns to identify shapes and letters. This software, as well as a few other artificial neural network programs, is available on an IBM-compatible disk for $9.95, plus $5.00 for postage and handling, from Images Company, P.O. Box 140742, Staten Island, NY 10314, (718) 698-8305.
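For a rough sense of how such a program works, the Python sketch below trains a Rosenblatt-style perceptron to tell two crude 3-by-3 letter patterns apart. The patterns, learning rate and number of passes are invented for illustration; this is not the program offered on the disk.

```python
# A toy perceptron in the spirit of Rosenblatt's: it learns to
# distinguish two 3x3 pixel patterns. Everything here is made up
# for illustration.
T = [1,1,1, 0,1,0, 0,1,0]   # a crude letter "T"
L = [1,0,0, 1,0,0, 1,1,1]   # a crude letter "L"
patterns = [(T, 1), (L, 0)]

weights, bias = [0.0] * 9, 0.0
for _ in range(20):                       # a few passes over the data
    for pixels, target in patterns:
        out = 1 if sum(w * p for w, p in zip(weights, pixels)) + bias > 0 else 0
        for i in range(9):                # perceptron learning rule
            weights[i] += 0.1 * (target - out) * pixels[i]
        bias += 0.1 * (target - out)

for pixels, target in patterns:
    out = 1 if sum(w * p for w, p in zip(weights, pixels)) + bias > 0 else 0
    print(out == target)                  # True, True once trained
```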
