Analog Computing: The AI/ML Redux
In ML, a huge part of the computational cost is converging on an approximate solution. Analog computing shows promise for finding a good guess with less time and energy.
Well into the 1970s, analog computers did much of the heavy number-crunching for science and engineering. They were key to the aerospace industry, solving problems in fluid flow, vibration analysis, thermal simulation, and more. Analog computers provided most of the computation for getting to the moon and for designing spy planes that flew at three times the speed of sound. They were also used in many other fields, e.g., biology and chemistry. Back in the day, I programmed an Applied Dynamics analog computer, originally intended for aircraft flight simulation, to do complex simulations of pulmonary ventilation and perfusion in humans.
Analog computers do computations in a fundamentally different way than digital computers. We "read" their answers by measuring a physical property of a mechanical, fluidic, or electronic analog of the problem we're solving. A very (very) simple analog for addition is a measuring cup. You can measure the total volume of several arbitrarily shaped containers by filling them with liquid and pouring them all into the measuring cup. Then, reading the graduations on the side of the cup, we "calculate" the total volume: the sum of the input volumes. Analog computers are pretty much that simple.

Or, for a more challenging (interesting) problem, suppose you need the integral of a variable flow rate. All you need to do is pipe that variable flow into the measuring cup over the time interval that you wish to integrate, and just like that, you can read the integral off the side of the measuring cup. You can even integrate several different functions at once by flowing them into the same cup (dynamically adding functions). Calculus with water! FYI, for electronic analog computers, it's almost the same thing: instead of water, it's electrons; instead of a measuring cup, it's a capacitor; and instead of reading markings on the side of the cup, we measure the voltage across the capacitor. It's really pretty simple.
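To make the measuring-cup integrator concrete, here is a minimal digital sketch of the same idea. The flow function and time step are illustrative assumptions; the "cup" is just an accumulator that collects flow-rate-times-time slices, which is exactly what the water (or the capacitor) does continuously.

```python
# Digital sketch of the measuring-cup integrator: the cup accumulates
# flow(t) * dt at each step, so its level is the running integral.

def flow(t):
    # Hypothetical input: flow rate (liters/second) at time t.
    # The exact integral of 2t over [0, T] is T**2.
    return 2.0 * t

def integrate_cup(flow, t_end, dt=1e-4):
    """Accumulate flow into the cup over [0, t_end] (Euler's rule)."""
    volume = 0.0
    t = 0.0
    while t < t_end:
        volume += flow(t) * dt  # pour flow(t)*dt into the cup
        t += dt
    return volume

# "Read the graduations": integral of 2t over [0, 3] should be close to 9.
print(integrate_cup(flow, 3.0))
```

The real analog computer does this with no time step at all: the physics integrates continuously, and you simply read off the level whenever you like.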
Many analog computers dating from around 1900 were systems of gears and cams that represented values and functions. In fact, from 1912 until 1965, such a mechanical computer was used to compute ocean tide timetables for the U.S. Coast and Geodetic Survey. They called it Old Brass Brains!
The gears were set up to turn cams that represented the position of the sun and the moon with respect to the earth and an output dial represented the elevation of the tide. For computers like these, you literally "cranked out" the answer!
About the same time that vacuum tubes were being used for digital computers, they were also being used for analog computers. It was readily apparent that the two types of computers were suited to quite different domains. Digital was well-suited for things like tallying up bank records (where you need to account for each penny) or for computations that needed six, ten, or more decimal places of accuracy. But in the age of vacuum tubes, it might take quite some time (and heat and electricity) to compute a single answer. Analog was the ideal choice if the required accuracy was between 0.1% and 1% error (the sort of error you would get with a slide rule), and as an additional free benefit, you got instantaneous answers that changed as quickly as you altered the input values. Analog computers would simulate the dynamic behavior of your problem at no extra cost!
So why is there a resurgent interest in analog computing today? Clearly, we can do all of the dynamic simulations that we want by using discrete digital computation. Since much of our computation is done in the cloud, we don't really worry about how big the server farm is (or the server farm of GPGPUs that it's partnered with) or the energy requirements or the waste heat removal. Note: Google has at least one server farm built next to a river just because of the cooling water resource. But if we need to simulate our problem at microsecond intervals, then we must recalculate the entire suite of simulation equations one million times for every second of simulation — whereas the analog solution would simply evolve over that second and we could record any or all of the internal computational results with very little additional power consumption. Of course, we still have the nagging issue of only having "slide rule-like" accuracy.
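The cost of that microsecond-interval recalculation is easy to see in a small sketch. The system below (an RC-style exponential decay, chosen purely for illustration) is about the simplest thing an analog integrator solves effortlessly, yet simulating one second of it digitally at 1 µs resolution means re-evaluating the equations a million times:

```python
# Simulating dv/dt = -v/tau for one second at microsecond resolution:
# the equations must be re-evaluated 10**6 times, where an analog
# circuit would simply evolve over that second.

import math

def simulate_rc(v0=1.0, tau=0.5, dt=1e-6, duration=1.0):
    v = v0
    steps = 0
    for _ in range(int(duration / dt)):
        v += (-v / tau) * dt  # one full re-evaluation per time step
        steps += 1
    return v, steps

v, steps = simulate_rc()
print(steps)                      # 1,000,000 evaluations per simulated second
print(v, math.exp(-1.0 / 0.5))    # Euler result vs. exact e^(-t/tau)
```

And this is a single state variable; a realistic simulation recalculates its entire suite of equations at every one of those million steps.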
One distinct characteristic/advantage of analog computing is that all of the computational "blocks" (what analog computer programmers call the hardware units that do integration, addition, multiplication, function lookup, etc.) are inherently parallel. If you are doing one integral in your solution, it will require a small amount of power and a small amount of time (e.g., 1 µs) for the answer to stabilize. If you are doing 100 integrals in your solution, your power requirement will scale linearly, but all the "blocks" will compute and stabilize at the same time. Everything happens in parallel ... automatically.
Another interesting and useful characteristic/advantage of analog computers is that it is trivial to connect them to input sensors. Sensors are inherently analog, producing a voltage or current in proportion to what they're measuring, so this can be directly applied to the inputs. And often, on the output side, devices are being controlled, such as motors or other actuators, which naturally accept a voltage or current for their input.
As an exercise, you might imagine a small robotic sailboat that has sensors for wind speed, wind direction, water speed, compass heading, etc. (all analog) connected to an analog computer which outputs analog voltages to control the angle of the sail, the angle of the rudder, etc. — all powered by a power source as simple as a garden solar light.
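As a rough sketch of that exercise, here is what one piece of the sailboat's controller might look like, written digitally. Every name, gain, and limit below is an illustrative assumption, not a real design; the point is that each line corresponds to a cheap analog scaling, summing, or clamping block, with sensor voltages in and actuator voltages out.

```python
# Hypothetical sketch of the sailboat controller imagined above.
# In the analog version, each operation here is a simple hardware block.

def rudder_angle(compass_heading, desired_heading, gain=0.8):
    """Proportional steering: deflect the rudder toward the heading error."""
    # Wrap the error into [-180, 180) so 350 -> 10 degrees is a small turn.
    error = (desired_heading - compass_heading + 180) % 360 - 180
    return max(-45.0, min(45.0, gain * error))  # clamp to rudder limits

def sail_angle(wind_direction_relative):
    """Crude sail trim: sheet out in proportion to apparent wind angle."""
    return min(90.0, abs(wind_direction_relative) / 2.0)

print(rudder_angle(350.0, 10.0))  # small correction across the north wrap
print(sail_angle(120.0))          # broad reach: sail well eased
```

The appeal of the analog realization is that this whole loop runs continuously, with no processor, at garden-solar-light power levels.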
Since 2005, a group of researchers has been working on building analog computers on a chip using conventional CMOS technology. Their initial work appears in a paper titled "A VLSI Analog Computer/Digital Computer Accelerator." They published an update on the integration scale and performance of this technology two years ago, titled "Energy-Efficient Hybrid Analog/Digital Approximate Computation in Continuous Time." More recently, they've announced further work on purely analog computing, as well as some interesting newer approaches to continuous-time digital/analog hybrid computing. Yannis Tsividis, the Edwin Howard Armstrong Professor of Electrical Engineering at Columbia University, is the driving force behind all of this work.
[Photo caption: The researchers used modern fabrication technology to pack a powerful analog computer into this tiny package. Photo: Randi Klett]
Hopefully, in the future, this analog approach will be able to model neurons: large sets of weighted inputs feeding sigmoid activation functions, with all of these components realized as simple analog blocks that stabilize to localized solutions using hardly any power at all. Analog front ends like these could feed many of our machine learning techniques, giving us that first good guess much more quickly and using much less power. Remember: your brain only uses about 11 W of power. This should be a goal for machine learning... in my humble opinion.
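The neuron being described is just a weighted sum followed by a sigmoid, which is easy to spell out. In the analog realization, each weight would be a scaling block, the sum a summing block, and the sigmoid the natural saturating response of an amplifier; the digital sketch below (with illustrative weights and inputs) shows the same computation:

```python
# One neuron: weighted sum of inputs, then a sigmoid activation.
# On an analog computer each term here maps to a simple hardware block.

import math

def sigmoid(x):
    """Classic logistic activation, saturating toward 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias=0.0):
    """Weighted sum of inputs plus bias, passed through the sigmoid."""
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return neuron_output if (neuron_output := sigmoid(activation)) else 0.0

print(neuron([1.0, 0.5, -0.5], [0.2, 0.4, 0.6], bias=0.1))
```

An analog version would settle on this output continuously as its inputs changed, rather than recomputing it on a clock.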
Once we are armed with a good guess, we could transfer the problem to the digital realm and perfect the answer in our usual manner to its final polish and luster.
At this final point, I can indeed (re: Pangloss) say, "This is the best of all possible worlds."
Opinions expressed by DZone contributors are their own.