Keywords: Neuronal Model; Non-Von Neumann Architecture; Organic Memristors; Massive Multi-Thread Processor
Mini Review
The time has come for computing inspired by the structure of
the brain. Algorithms that use neural networks and deep learning,
imitating some aspects of the human brain, allow digital computers
to reach remarkable heights in language translation and in finding
elusive patterns in huge volumes of data. But while engineers continue
to develop this capable computational strategy, the energy efficiency
of digital computing is approaching its limit. Our data centers and
supercomputers already consume megawatts: about 2% of all electricity
consumed in the USA goes to data centers. The human brain, by contrast,
gets by on roughly 20 watts, a small fraction of the energy contained
in the food we eat each day. If we want to keep improving computing
systems, we need to make computers work more like brains [1,2]. The
surge of interest in neuromorphic technologies stems from this idea,
promising to move computers beyond simple neural networks, toward
circuits that work like neurons and synapses. The development of
physical, brain-like circuits is already well advanced. Work done in
my laboratory and at other institutions around the world over the past
35 years has produced artificial nerve components, analogous to
synapses and dendrites, that respond to and generate electrical
signals in much the same way as the real ones.
The very fact that we can build such systems suggests that it will
not be long before smaller-scale chips suitable for portable and
wearable electronics appear. Such gadgets must consume little energy,
so a highly energy-efficient neuromorphic chip, even if it takes on
only part of the computation, say, signal processing, could be
revolutionary. Existing features, such as speech recognition, could
then work in noisy environments. One can even imagine future
smartphones performing real-time speech translation in a conversation
between two people. Consider this: over the 40 years since the advent
of integrated circuits for signal processing, Moore's law has improved
their energy efficiency by roughly a factor of 1,000. Truly brain-like
neuromorphic chips could go well beyond those improvements, cutting
energy consumption by another factor of 100 million. Computations that
once needed a data center would then fit in the palm of your hand.
An ideal machine that approaches the brain would need to recreate
analogues of all of the brain's main functional components: the
synapses, which connect neurons and allow them to receive and respond
to signals; the dendrites, which combine and perform local
computations on incoming signals; and the cell body, or soma, the
region of each neuron that combines the input from the dendrites and
transmits the output to the axon [3].
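To make this decomposition concrete, here is a minimal sketch, in Python, of a leaky integrate-and-fire model in which the three roles are explicit: synaptic weights scale incoming spikes, a "dendritic" sum combines them, and the "soma" integrates the result and fires once a threshold is crossed. All names and parameter values are illustrative assumptions, not taken from any particular chip or paper.

```python
# Minimal leaky integrate-and-fire neuron: illustrative only.
# Parameter values are arbitrary choices for demonstration.

def lif_step(v, inputs, weights, leak=0.9, threshold=1.0):
    """One time step: synapses weight the inputs, the 'dendrite' sums them,
    the 'soma' integrates with leak and spikes at threshold."""
    dendritic_sum = sum(w * x for w, x in zip(weights, inputs))  # synapses + dendrite
    v = leak * v + dendritic_sum                                 # soma integration
    if v >= threshold:                                           # soma fires...
        return 0.0, 1                                            # ...reset, emit spike on the 'axon'
    return v, 0

v, weights = 0.0, [0.3, 0.5, 0.2]
spike_train = [[1, 0, 1], [0, 1, 0], [1, 1, 1], [0, 0, 0]]
for t, inputs in enumerate(spike_train):
    v, spike = lif_step(v, inputs, weights)
    print(f"t={t}: v={v:.2f}, spike={spike}")
```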
The simplest versions of these basic components have already been
implemented in silicon. This work began with the same metal-oxide-semiconductor
field-effect transistor, or MOSFET, billions of copies of which are used
to build the logic circuits in modern digital processors. These devices
have a lot in common with neurons [4]. Neurons work by means of
voltage-controlled barriers, and their electrical and chemical activity
depends mainly on channels through which ions move between the inside
and the outside of the cell. This is a smooth, analog process, a
continuous accumulation or decay of signal rather than simple on/off
switching. MOSFETs are also voltage controlled and operate through the
movement of individual units of charge. And when MOSFETs operate in the
"subthreshold" mode, below the voltage threshold at which they switch
on and off, the amount of current flowing through the device is very
small: less than one thousandth of the current found in
typical switches or digital gates.
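As a rough illustration of how small these currents are, the standard subthreshold approximation has the drain current grow exponentially with gate voltage, I_D ≈ I_0·exp(V_GS/(n·V_T)). The sketch below evaluates it with assumed, device-dependent values of I_0 and n; the exponential form, not the particular numbers, is the point.

```python
import math

# Subthreshold MOSFET sketch: drain current grows exponentially with
# gate-source voltage below threshold. I0 and n are device dependent;
# the values here are illustrative assumptions.
I0 = 1e-12        # leakage-scale prefactor (A), assumed
n = 1.5           # subthreshold slope factor, assumed
VT = 0.0259       # thermal voltage kT/q at room temperature (V)

def subthreshold_current(vgs):
    """Exponential subthreshold drain current (textbook approximation)."""
    return I0 * math.exp(vgs / (n * VT))

for vgs in (0.1, 0.2, 0.3):
    # Nanoamps and below, versus microamps to milliamps in a switched-on gate.
    print(f"Vgs = {vgs:.1f} V -> Id = {subthreshold_current(vgs):.2e} A")
```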
New Principles for Developing Synthetic Neuronal Networks
The idea that the physics of subthreshold transistors could be used
to create brain-like circuits came from Carver Mead of Caltech, who
helped drive the revolution in very-large-scale integrated circuits
in the 1970s. Mead pointed out that chip designers were ignoring many
interesting aspects of transistor behavior by using transistors
exclusively for digital logic. That process, as he wrote in 1990,
means that "all the beautiful physics that is built into those
transistors is mashed into a 1 or a 0, and then painfully built back
up with AND and OR gates to reinvent the multiply." A more "physical,"
physics-based computer could perform more computation per unit of
energy than a conventional digital one, and, Mead predicted, it would
take up less space as well. In the years that followed, neuromorphic
engineers built all of the brain's basic blocks out of silicon with
high biological fidelity.
Dendrites, axons, and neuron somas can all be made from standard
transistors and other elements. For example, in 2005, Ethan Farquhar
and I created a neural circuit from a set of six MOSFETs and a handful
of capacitors. Our model emitted electrical spikes very similar to
those produced by the neuron of a squid, a long-standing subject of
experiments. Moreover, our circuit achieved this at current and energy
levels close to those in the squid's own biology. If we had instead
used analog circuits to model the equations neuroscientists have
derived to describe that behavior, we would have needed about 10 times
as many transistors. Performing such calculations on a digital
computer would require even more space.

[Figure: Synapses and somas. A floating-gate transistor that can store
varying amounts of charge can be used to build a crossbar array of
artificial synapses (bottom left). Electronic versions of a neuron's
other components, such as the soma, can be made from standard
transistors and other elements [5].]
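The "equations derived by neuroscientists" for the squid neuron are the Hodgkin-Huxley equations. The sketch below integrates them numerically, with the classic textbook parameters and simple Euler stepping, just to show what the six-transistor analog circuit emulates directly in device physics.

```python
import math

# Hodgkin-Huxley squid-axon model, classic textbook parameters.
C = 1.0                            # membrane capacitance (uF/cm^2)
gNa, gK, gL = 120.0, 36.0, 0.3     # max conductances (mS/cm^2)
ENa, EK, EL = 50.0, -77.0, -54.4   # reversal potentials (mV)

# Voltage-dependent opening/closing rates for the gating variables.
def a_m(V): return 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * math.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * math.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + math.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * math.exp(-(V + 65) / 80)

V, m, h, n = -65.0, 0.05, 0.6, 0.32   # resting state
dt, I_inj = 0.01, 10.0                # step (ms), injected current (uA/cm^2)

spikes = 0
for step in range(int(50 / dt)):      # simulate 50 ms with Euler stepping
    I_Na = gNa * m**3 * h * (V - ENa) # sodium current
    I_K = gK * n**4 * (V - EK)        # potassium current
    I_L = gL * (V - EL)               # leak current
    V_new = V + dt * (I_inj - I_Na - I_K - I_L) / C
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    if V < 0 <= V_new:                # count upward zero crossings as spikes
        spikes += 1
    V = V_new

print(f"{spikes} spikes in 50 ms at I_inj = {I_inj} uA/cm^2")
```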
Synapses are a little harder to emulate. A device that behaves like
a synapse must remember what state it is in, respond in a particular
way to an incoming signal, and adapt its response over time. There
are several potential approaches to building synapses. The most mature
is the single-transistor learning synapse (STLS), which my colleagues
at Caltech and I worked on in the 1990s, when I was a graduate student
in Mead's lab. We first presented the STLS in 1994, and it has become
an important tool for engineers building modern analog circuits, such
as physical neural networks. In a neural network, each node has an
associated weight, and these weights determine how data from different
nodes are combined. The STLS was the first device that could hold a
range of different weights and be reprogrammed on the fly. It is also
non-volatile, meaning it remembers its state even when not in use,
which greatly reduces its energy requirements.
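A behavioral sketch of what such a device offers is given below; the class name, the linear read-out, and the programming interface are my illustrative assumptions, not the actual STLS circuit equations. The essential points are that the weight lives in non-volatile stored charge, small programming steps adjust it in place, and reading it does not disturb it.

```python
class FloatingGateSynapse:
    """Behavioral model of a non-volatile, multi-level synapse.
    Conceptual sketch only; a real STLS has exponential device physics."""

    def __init__(self, stored_charge=0.0):
        self.q = stored_charge    # charge on the floating gate (persists without power)

    def program(self, delta_q):
        """Injection/tunneling nudges the stored charge up or down."""
        self.q += delta_q

    def read(self, v_in):
        """Response scales with the stored charge; reading is non-destructive."""
        return self.q * v_in

syn = FloatingGateSynapse()
syn.program(+0.5)                 # set an analog weight on the fly
print(syn.read(1.0))              # -> 0.5
syn.program(-0.25)                # fine-tune in place, nothing is erased
print(syn.read(1.0))              # -> 0.25, retained even with power off
```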
The STLS is a kind of floating-gate transistor, the device used to
build flash-memory cells. In a conventional MOSFET, the gate controls
the current passing through the channel. A floating-gate transistor
has a second gate sitting between that electrical gate and the
channel. This floating gate is not directly connected to ground or to
any other component. Thanks to that electrical isolation, reinforced
by high-quality silicon insulation, charge is stored on the floating
gate for a very long time. The floating gate can hold different
amounts of charge and can therefore produce an electrical response at
many different levels, which is exactly what an artificial synapse
needs in order to vary its response to a stimulus. My colleagues and
I used the STLS to demonstrate the first crossbar network, a
computational arrangement now popular among nanodevice researchers.
In this two-dimensional array, devices sit at the intersections of
input lines running from top to bottom and output lines running from
left to right. Such a configuration is useful because it lets you
program the connection strength of each "synapse" individually,
without disturbing the other elements of the array.
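The same idea in code: a sketch of a crossbar array (with illustrative values and function names of my choosing) in which each cell is programmed individually through its row/column address, while a read drives all input lines at once so that each output line collects a weighted sum, i.e. one vector-matrix multiply per analog step.

```python
import numpy as np

# Crossbar sketch: weights[i, j] sits at the crossing of input line i
# and output line j. All values are illustrative.
n_inputs, n_outputs = 4, 3
weights = np.zeros((n_inputs, n_outputs))

def program_cell(i, j, value):
    """Select one device by its row/column address; neighbors are untouched."""
    weights[i, j] = value

def read(inputs):
    """Drive all input lines at once; each output line sums its column,
    performing a vector-matrix multiply in a single step."""
    return inputs @ weights

program_cell(0, 0, 0.8)
program_cell(2, 1, 0.5)
program_cell(3, 2, 0.3)
print(read(np.array([1.0, 0.0, 1.0, 1.0])))   # -> [0.8, 0.5, 0.3]
```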
Thanks in part to the recent DARPA program called SyNAPSE, the field
of neuromorphic engineering has seen a surge of research into
artificial synapses built from nanodevices such as memristors,
resistive RAM, and phase-change memory, as well as from floating-gate
devices. But these new artificial synapses will be hard pressed to
improve on the floating-gate arrays of twenty years ago. Memristors
and other new types of memory are difficult to program. The
architecture of some makes it hard to address a specific device
within a crossbar array; others require a dedicated programming
transistor, which significantly increases their size. Because
floating-gate memory can be programmed to a wide range of values, it
is easier to tune to compensate for device-to-device manufacturing
variation than these other nanodevices. Several research groups
studying neuromorphic devices tried incorporating nanodevices into
their designs and ultimately ended up using floating-gate devices
instead.

So how do we combine all these brain-like components? In the human
brain, neurons and synapses are intertwined, so developers of
neuromorphic chips should likewise take an integrated approach, with
all the components placed on a single chip. In many laboratories,
however, you will not find this: to make research projects easier to
manage, the separate building blocks are kept in different places.
Synapses, for example, can be placed in an array outside the chip,
with connections routed through another chip. In the brain, by
contrast, memory elements, such as synaptic strengths, are mixed in
with the signal-transmitting components. And the brain's "wires," the
dendrites and axons that carry incoming signals and outgoing impulses,
are generally short relative to the size of the brain, and they do not
need much energy to sustain a signal. From anatomy, we know that more
than 90% of neurons connect only to about 1,000 of their neighbors.

Another big question for the creators of brain-like chips and
computers is the algorithms that will run on them. Even a slightly
brain-like system can hold a big advantage over a conventional digital
one. In 2004, for example, my group used floating-gate devices to
perform multiplication for signal processing, taking 1,000 times less
energy and 100 times less space than a digital system. In the years
since, researchers have successfully demonstrated neuromorphic
approaches to other kinds of signal-processing computation. But the
brain is still 100,000 times more efficient than these systems. That
is because, although our current neuromorphic technologies exploit
the neuron-like physics of transistors, they do not use algorithms
like the ones the brain uses to do its job.
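A back-of-the-envelope computation, with round numbers assumed purely for illustration, shows why that local wiring matters: for a million neurons, all-to-all wiring would need roughly a thousand times more connections than the brain's roughly-1,000-neighbor pattern.

```python
# Back-of-the-envelope: cost of local vs. all-to-all connectivity.
# Round numbers chosen for illustration.
n_neurons = 1_000_000
local_fanout = 1_000                      # ~1,000 neighbors per neuron (from anatomy)

all_to_all = n_neurons * (n_neurons - 1)  # every neuron wired to every other
local = n_neurons * local_fanout          # short, mostly-local wiring

print(f"all-to-all: {all_to_all:.1e} connections")
print(f"local:      {local:.1e} connections")
print(f"ratio:      {all_to_all / local:.0f}x fewer wires with local connectivity")
```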
Conclusion
Today we are just beginning to discover these physical algorithms, the processes that could let brain-like chips operate with efficiency close to the brain's. Four years ago, my group used silicon somas, synapses, and dendrites to run a word-spotting algorithm that recognizes words in audio recordings. That algorithm showed a thousand-fold improvement in energy efficiency over conventional digital signal processing. Eventually, by lowering the voltage applied to the chips and using smaller transistors, researchers should be able to build chips comparable in performance to the brain across many kinds of computing.

When I began neuromorphic research, everyone believed that developing brain-like systems would give us amazing capabilities. Indeed, whole industries are now built around AI and deep learning, and these applications promise to transform our mobile devices, our financial institutions, and how people interact in public places. And yet these applications rely very little on our knowledge of how the brain works. Over the next 30 years we will no doubt see that knowledge used more and more. We already have many of the basic hardware blocks needed to translate neurobiology into computing. But we need to understand better still how that hardware should behave, and which computational schemes will yield the best results.
References
- Hammarlund P, Ekeberg O (1998) Large neural network simulations on multiple hardware platforms. Journal of Computational Neuroscience 5(4): 443-459.
- Shahaf G, Eytan D, Gal A, Kermany E, Lyakhov V, et al. (2008) Order-based representation in random networks of cortical neurons. PLoS Computational Biology 4(11): e1000228.
- Bakkum DJ, Ben Ary G, Gamblen P, De Marse TB, Potter SM, et al. (2004) Removing some "A" from AI: Embodied Cultured Networks. In: Embodied Artificial Intelligence. Springer pp: 130-145.
- Abeles M (1991) Corticonics: Neural Circuits of the Cerebral Cortex. Cambridge University Press, p 296.
- Edelman GM (1993) Neural Darwinism: selection and reentrant signaling in higher brain function. Neuron 10(2): 115-125.