
Neuromorphic: Another Hardware Approach to AI

Yet another kind of computing hardware? Sort of digital yet a bit analog? What can it do for us? Is it like a perfect free throw... all net?


Neuromorphic computing is an emerging technology that has actually been emerging for quite some time. It was first proposed in the 1980s by Carver Mead to take advantage of some of the technologies emerging in very large-scale integration (VLSI). The idea was to put a large array of analog circuits on a single chip where all of the individual analog blocks were designed to leverage our knowledge of:

  1. Biologically based neural architectures in living nervous systems.

  2. New machine learning algorithms.

  3. Physical microelectronics.

One of the driving forces in this computing domain is that the end is in sight for the dramatic exponential growth of single-processor von Neumann computer architectures. (Of course, the same worry drives parallelism, quantum computing, etc. Who doesn't stay up nights worrying about the day when we can't compute any faster?!)

A large set of challenging problems still remains.

Neuromorphic processes at significant scales can be simulated on very large (petascale) computer arrays. But several groups have been working on dedicated neuromorphic hardware built from wafer-scale analog circuitry. A European project called Artificial Brains has a subproject called BrainScaleS that has researched and developed this technology on 20 cm diameter silicon wafers. Each wafer contains 384 regions, each of which implements 128,000 synapses and 512 neurons. All added up, this gives each wafer about 200,000 neurons and about 50 million synapses.

To give some perspective, a mouse brain contains about 75 million neurons. But where biological neurons operate on millisecond timescales, the neuromorphic wafer's analog circuits run far faster, so it can think at approximately 10,000 times the rate of its organic equivalent. (Just for reference, the human brain has approximately 85 billion neurons.) BrainScaleS is now in the hardware commissioning phase and will be laying the groundwork for fundamental principles and techniques in designing this hardware.
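To make the "all added up" arithmetic explicit, here is a quick back-of-the-envelope calculation in Python using the per-region figures quoted above. The wafer count needed to reach a mouse-scale neuron count is my own extrapolation, not a figure from the project.

```python
# Back-of-the-envelope totals for one BrainScaleS wafer,
# using the per-region figures quoted above.
regions_per_wafer = 384
neurons_per_region = 512
synapses_per_region = 128_000

neurons_per_wafer = regions_per_wafer * neurons_per_region    # 196,608 (~200,000)
synapses_per_wafer = regions_per_wafer * synapses_per_region  # 49,152,000 (~50 million)

# Rough extrapolation: wafers needed to match a mouse brain's neuron count.
mouse_brain_neurons = 75_000_000
wafers_per_mouse = mouse_brain_neurons / neurons_per_wafer    # roughly 380 wafers

print(neurons_per_wafer, synapses_per_wafer, round(wafers_per_mouse))
```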


For those of you interested in diving into the deep end of the neuromorphic pool, there is an in-depth (and interesting) HBP Neuromorphic Computing Platform Guidebook. It is a living document (last updated 2/27/2018!) that explains how it all works and how to interact with it via the Python API. Currently, time on the neuromorphic server is requestable only by HBP members. But I wouldn't be surprised if, in the future, some sort of access might become available to a larger pool of experimenters.
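To give a flavor of what that Python API looks like, here is a minimal sketch in the PyNN style the Guidebook is built around. I'm using the pyNN.nest software-simulator backend as a stand-in, and the population sizes, connectivity, and parameter values are arbitrary illustrations; running on the actual BrainScaleS or SpiNNaker hardware requires the platform-specific backends and an HBP account, as the Guidebook describes.

```python
# A minimal PyNN-style network: a Poisson spike source driving a small
# population of integrate-and-fire neurons. Backend and parameters are
# illustrative stand-ins, not a real hardware configuration.
import pyNN.nest as sim   # software simulator backend as a stand-in

sim.setup(timestep=0.1)   # ms

stimulus = sim.Population(16, sim.SpikeSourcePoisson(rate=50.0))
neurons = sim.Population(64, sim.IF_cond_exp(tau_m=20.0, v_thresh=-50.0))

sim.Projection(stimulus, neurons,
               sim.FixedProbabilityConnector(p_connect=0.2),
               sim.StaticSynapse(weight=0.005, delay=1.0))

neurons.record("spikes")
sim.run(1000.0)           # simulate one second of biological time

spikes = neurons.get_data().segments[0].spiketrains
sim.end()
```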

The IBM Machine Intelligence project (as opposed to machine learning) has focused its research energy on experimenting with algorithms that could potentially be realized in hardware, designing highly specialized numerical-processing accelerator boards optimized for computing neural models.

IBM Escape 9000 Board

Above is one of the compute boards from the ESCAPE 9000 Neural Computer. It contains 1,296 Field Programmable Gate Arrays (FPGAs) plus 2,592 ARM cores and 1 TB of high-bandwidth RAM.

Other researchers are exploring the power of using memristors' analog-memory (resistive RAM, or RRAM) characteristics to store the weights of neural inputs. Remember, the advantages of memristors are that they use no power until accessed, need no power to retain data, can be adjusted to any arbitrary resistive value, and use very little chip real estate, making them ideal for this application.
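One way to picture this is a memristor crossbar: the programmed conductance of each cell stores one weight, and driving the rows with input voltages makes each column sum currents (Ohm's law per cell, Kirchhoff's current law per column), which is exactly a matrix-vector multiply. The NumPy sketch below is an idealized model of that behavior; real RRAM devices add noise, nonlinearity, and limited precision, and the values here are arbitrary.

```python
import numpy as np

# Idealized memristor crossbar: each cell's conductance G[i, j] stores a weight.
# Applying input voltages v to the rows produces column currents G.T @ v,
# i.e. an analog matrix-vector multiply "for free".
rng = np.random.default_rng(0)

n_inputs, n_neurons = 4, 3
G = rng.uniform(1e-6, 1e-4, size=(n_inputs, n_neurons))  # conductances (siemens)
v = np.array([0.2, 0.0, 0.5, 0.1])                       # input voltages (volts)

column_currents = G.T @ v   # current flowing out of each column (amps)
print(column_currents)
```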

To get a better feel, consider the impact of massive memory requirements in conjunction with massive computational loads: together, they are why our purely digital, conventional compute-farm approach is doomed to fail.


One of the interesting and important aspects of today's multilayered neural computing is that it can represent operations on highly nonlinear feature spaces. If all the things we were trying to learn could be represented by linear models, then we could solve all those problems with simple multiple linear regression. But the real world is always more complicated than that.
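A tiny illustration of that point: XOR is the classic relationship no linear model can capture, while adding even one nonlinear feature (or a hidden layer) handles it easily. The sketch below uses plain NumPy; the choice of x1*x2 as the extra feature is just one convenient nonlinearity for the example.

```python
import numpy as np

# XOR: the classic target no linear model can fit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# Best linear fit (with bias term) via least squares.
A = np.hstack([X, np.ones((4, 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
print("linear predictions:", A @ w)           # stuck at ~0.5 for every input

# Add one nonlinear feature (x1 * x2) and the same least-squares fit is exact.
A_nl = np.hstack([X, (X[:, 0] * X[:, 1])[:, None], np.ones((4, 1))])
w_nl, *_ = np.linalg.lstsq(A_nl, y, rcond=None)
print("nonlinear predictions:", A_nl @ w_nl)  # ~[0, 1, 1, 0]
```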


DARPA has a program called SyNAPSE that is focused on creating this new domain of cognitive computing hardware. And this program directly led to the IBM TrueNorth CMOS device.

The core concept behind all of these brain simulations (vast collections of neurons) is the spiking neuron in biology and, on the machine learning side, the spiking neural network.

As the name implies, the spiking neuron reacts to whatever input spikes (electrical pulses) it receives. Once it has received a certain number of input spikes within a given time window, it will output one (or more) spikes of its own. That's pretty much it. Its behavior is "programmed" by adjusting the weights that determine how much each input (synapse) contributes toward firing. So the neuron responds to the rate of spikes across its inputs. If this all seems a bit digital (pulses) and a bit analog (rates and weights), well, it is.
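As a concrete (and deliberately over-simplified) illustration, here is a leaky integrate-and-fire neuron in a few lines of Python. Each incoming spike adds its synaptic weight to a membrane value that otherwise decays; when the accumulated value crosses a threshold, the neuron emits a spike and resets. The parameter values are arbitrary, chosen only to show the mechanism.

```python
import numpy as np

# A leaky integrate-and-fire neuron: weighted input spikes charge up a
# "membrane potential" that leaks over time; crossing the threshold
# produces an output spike and a reset. Parameters are illustrative only.
def lif_neuron(input_spikes, weights, threshold=1.0, leak=0.95):
    """input_spikes: (timesteps, n_synapses) array of 0/1 spikes."""
    v = 0.0
    output = []
    for spikes_t in input_spikes:
        v = leak * v + np.dot(weights, spikes_t)  # leak, then integrate inputs
        if v >= threshold:
            output.append(1)                      # fire...
            v = 0.0                               # ...and reset
        else:
            output.append(0)
    return output

rng = np.random.default_rng(1)
spikes = (rng.random((50, 4)) < 0.2).astype(float)   # 4 synapses, 50 timesteps
print(lif_neuron(spikes, weights=np.array([0.3, 0.25, 0.2, 0.35])))
```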

This hardware is still in its infancy. And even the researchers are willing to concede that they may be trying to "build an airplane by simulating a bird." There are even deeper concerns (that they have the nerve to admit) that we aren't completely certain that all the magic that happens in a brain when it "thinks" is done by spiking neurons.

I am optimistic even though we still have a lot to discover on this path.

To be continued. Stay tuned.



