
Memristors Up Their Game: More Storage per Cell



As the renowned author George McFly once said, "You are my density." Heavy-duty computing in massive data centers is about to get denser, and a bit faster along with it.


I've been following memristor technology for a while and have even written a few articles about it in the past year. It is on the bleeding edge today and will be mainstream sometime in 2018-2019. While I was at AI World in Boston in December, Dell's technical representatives told me that crossbar memristors that work in conjunction with conventional DDR RAM chips will be available soon. This should lead to silliness such as many terabytes of RAM on the engineering workstation under your desk (I need some of that). But these initial memristor-based memories are still binary devices. True, they are very much smaller, they need no power to retain data, and they are incredibly simple devices (just a grid of wires crossing... really, that's about it). But these early devices store only binary: a one or a zero.



Something new is on the horizon. It's been anticipated at the theoretical level ever since this first generation of memristors was demonstrated.

There are a few previous articles I wrote about memristors: Part 1 and Part 2.

A little refresher about this technology: Memristors inherently store an analog signal. They record how much current has flowed in one direction through the junction. They increase their resistance in proportion to the total amount of current that has flowed. When you reverse the flow of the current, the resistance decreases in proportion to the reverse flow. You can think of it as pouring liquid into or out of a beaker: whenever you stop pouring, the level stays the same without any further use of energy. Those of you who are familiar with analog computers will recognize this as a simple integrator. This means that memristors can theoretically store many different levels at each location.
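The integrator behavior described above can be sketched in a few lines of code. This is a toy model of the idea, not a physical device model; the resistance range and scaling constant are made-up illustrative values.

```python
class Memristor:
    """Toy model of a memristor as a current integrator.

    Resistance rises in proportion to total forward charge and falls
    with reverse charge; with no current flowing, the state persists
    (like the liquid level in the beaker analogy above).
    """

    def __init__(self, r_min=100.0, r_max=1000.0, k=10.0):
        self.r_min = r_min      # lowest resistance (ohms) - illustrative
        self.r_max = r_max      # highest resistance (ohms) - illustrative
        self.k = k              # ohms of change per unit of charge
        self.resistance = r_min

    def apply_current(self, current, duration):
        """Pass `current` for `duration`; the sign sets the direction."""
        charge = current * duration
        self.resistance += self.k * charge
        # Clamp to the physical limits of the device.
        self.resistance = max(self.r_min, min(self.r_max, self.resistance))

m = Memristor()
m.apply_current(2.0, 10.0)   # forward current raises resistance
print(m.resistance)          # 300.0
m.apply_current(-2.0, 5.0)   # reverse current lowers it again
print(m.resistance)          # 200.0
```

Note that nothing happens between calls to `apply_current`: like the beaker, the state simply sits there, which is why no refresh power is needed.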

Themis Prodromakis, Professor of Nanotechnology and EPSRC Fellow at the University of Southampton, says:

"Memristors are a key enabling technology for next-generation chips, which need to be highly reconfigurable yet affordable, scalable and energy-efficient."

What the researchers at the University of Southampton have developed is a way to enhance the precision and reproducibility of level-setting, allowing as many as 256 distinct levels per memristor junction. This means every memristor junction can store one byte of information instead of only one bit (remember: one byte represents a number between 0 and 255). On the purely digital side, this provides an 8x increase in data density for a given chip. Memristors have already been demonstrated that are tens of times smaller than current DDR RAM cells, so making them eight times denser still is very good news indeed.
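To see why 256 levels means one byte per cell, here is a sketch of quantizing a cell's analog resistance into 256 evenly spaced levels and reading it back. The resistance range is illustrative and not taken from the Southampton work.

```python
# Sketch: mapping a cell's analog resistance onto one of 256 levels.
# R_MIN and R_MAX are made-up values for illustration only.
R_MIN, R_MAX = 100.0, 1000.0
LEVELS = 256                  # 256 levels = 2**8 = 8 bits = 1 byte per cell

STEP = (R_MAX - R_MIN) / (LEVELS - 1)

def write_level(byte_value):
    """Target resistance for a given byte value (0-255)."""
    return R_MIN + byte_value * STEP

def read_level(resistance):
    """Recover the byte by snapping a measured resistance to the nearest level."""
    return round((resistance - R_MIN) / STEP)

assert read_level(write_level(200)) == 200   # round-trips for every 0-255
```

The hard engineering problem, of course, is making the physical device hit those levels precisely and repeatably, which is exactly what the level-setting technique addresses.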

But this also creates some grand possibilities for circuitry such as neuromorphic chips. Several chip manufacturers are working on dedicated circuitry for the implementation of neural networks. We're all familiar with the simulation of neural nets and the fantastic speedups we can get by using Nvidia GPGPUs, but this is different. Neuromorphic circuitry more closely models what a neuron does. The actual circuitry has multiple inputs with adjustable weights, a dedicated summation unit for those weighted inputs, and an output that can be logically directed to another input: that's what a neural net is.
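The neuron a neuromorphic circuit implements in hardware can be sketched in software like this. The weights, threshold, and wiring below are illustrative; in a memristor implementation, the weights would live in the crossbar junctions themselves rather than in RAM.

```python
# Sketch of the neuron described above: weighted inputs, a summation
# unit, and an output that can be directed to another neuron's input.

def neuron(inputs, weights, threshold=0.0):
    """Weighted sum of inputs, passed through a simple step activation."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Wire one neuron's output into another's input, as the text describes.
layer1_out = neuron([1, 0, 1], [0.4, -0.2, 0.5])   # 0.4 + 0.5 = 0.9 > 0, fires
layer2_out = neuron([layer1_out, 1], [0.3, -0.6])  # 0.3 - 0.6 = -0.3, silent
print(layer1_out, layer2_out)                      # 1 0
```

In a simulation, every one of those multiplications means fetching a weight from memory; in a memristor crossbar, the multiplication happens where the weight is stored.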

If memristors could be used to change and remember the input weights without having to store them as numbers somewhere in RAM and recall them to use in a simulation of the neuron, then all sorts of advantages accrue. Aside from the speed gains to be had by computing results with dedicated processing only nanometers away from the data storage, another advantage is the power savings. In most machine learning scenarios, most of the electrical power (and resultant heat) is used to move data between storage and computational circuitry. Just removing the necessity to shuffle binary numbers representing millions of ever-changing net connection weights could provide an order of magnitude (or more) in power savings. I predict you will see some amazing stories in the next year or two centered around power savings approaching this magnitude.

But there are some other fun (and efficient) things you might be able to do with a memory cell that stores one byte of information as a monotonic series of 256 levels. For example, ASCII data gets sorted everywhere, in almost every computer program. Imagine if, instead of reading eight bits into a register and then determining whether they were greater or less than eight bits in another register, all you had to do was compare the two cells with a very simple comparator, with a simple output of greater-than/equal-to/less-than. Similarly, albeit a little more complex (but not much), large integers could be dealt with in a byte-wise fashion: a 64-bit integer could be handled as at most eight byte-sized comparisons.
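That byte-wise comparison scheme can be sketched in ordinary software. This is a pure-software analogy of what per-cell comparators could do in hardware: walk the bytes from most significant to least, and stop at the first pair that differs.

```python
# Sketch: comparing two unsigned 64-bit integers byte by byte, most
# significant byte first, as the per-cell comparator scheme suggests.

def bytewise_compare(a, b):
    """Return -1, 0, or 1 for a < b, a == b, a > b."""
    for shift in range(56, -8, -8):       # MSB first: shifts 56, 48, ..., 0
        byte_a = (a >> shift) & 0xFF
        byte_b = (b >> shift) & 0xFF
        if byte_a != byte_b:              # one three-way comparator decision
            return -1 if byte_a < byte_b else 1
    return 0                              # all eight bytes were equal

assert bytewise_compare(1_000_000, 999_999) == 1
assert bytewise_compare(42, 42) == 0
```

The appeal in hardware is that each step is a tiny analog comparator across two cells, rather than a fetch of both operands into CPU registers.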

There's a lot of potential for this new technology, and I can hardly wait. Stay tuned!



Opinions expressed by DZone contributors are their own.
