
Analog Computers Return: Finally

Ask almost anyone under the age of 40 to think of a computer and they will think of a digital computer. But serious computing began with analog computers dating back to the ancient Greeks, or even earlier. Over the last half-century analog computing has fallen out of favor, but now... analog is coming back!


It's quite likely that most of the people reading this have never touched or even seen an analog computer, let alone programmed one. But analog computers were the first true computing devices, and their history goes back thousands of years.

The ancient astrolabe was an astronomical analog computer; so is a sundial, and so is an hourglass.

For those of you who don't know the distinction between digital and analog computing, it is usually and succinctly described as the difference between counting and measuring. Digital computers count things and operate on discrete representations of those counts. Analog computers measure things and operate by blending those measures in some way and measuring the resulting blend.

Here are a few of the advantages and disadvantages of each approach.

Digital

Pro

  1. Exact computations (at least to the precision of the binary numbers used)

  2. Not influenced by how much time the computation requires

  3. Can easily represent processing branches and control program flow

Con

  1. Requires a large number of state (transistor) changes to do simple things (e.g. addition)

  2. Data must be explicitly accessed and stored in separate circuits

  3. Continuous functions must be approximated with repetitive computations (e.g. integrals)

Analog

Pro

  1. Very few components accomplish very sophisticated mathematics

  2. Computations are inherently continuous (result changes instantly when input changes)

  3. Power requirements are minuscule

Con

  1. Answers are approximate (on the order of 0.1%, usually adequate for engineering)

  2. Writing a program is more like designing an electrical circuit

  3. Inputs and outputs are signals, not numbers (answers are curves, not tables)

Of course, one of the most important analog computers—and the foundation of much of our modern science and engineering—was invented in the early 1600s: the slide rule. John Napier published the concept of the logarithm in 1614, and shortly thereafter the Anglican minister William Oughtred (1575–1660) invented the circular slide rule.

In case you've forgotten how logarithms work: a logarithm is the power to which a fixed number (the base) must be raised to produce a given number. Oughtred seized on the additive property of logarithms: adding the logarithms of X and Y together yields the logarithm of the product of X and Y. He realized that two rulers with logarithmic scales could be slid relative to each other, thus adding the logarithms. And because the scales were logarithmic, the position of the resulting sum would directly indicate the product of those two numbers.
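To make the additive property concrete, here is a minimal Python sketch (my illustration, not from the original article) that multiplies two numbers the way a slide rule does: add the logarithms, then read the product back off the logarithmic scale.

```python
import math

def slide_rule_multiply(x: float, y: float) -> float:
    """Multiply x and y the way a slide rule does: sliding one
    log scale along another physically adds the two logarithms;
    reading the answer off the scale undoes the logarithm."""
    combined = math.log10(x) + math.log10(y)  # slide the scales together
    return 10 ** combined                     # read the product off the scale

print(slide_rule_multiply(2.0, 3.0))  # ~6.0, accurate only to float precision,
                                      # much as a real slide rule is limited by
                                      # how finely you can read its scale
```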

Before you summarily dismiss the slide rule as a quaint historical artifact, remember that the SR-71 Blackbird and the Saturn V moon rocket were designed using slide rule computation. Of course, by the late 1960s electronic computers were being used for these kinds of engineering tasks, but they were not digital computers. They were analog computers (although by the late 1970s they were technically hybrid computers that used a digital front end to administer and set up the analog program and initial data, then handed off control to the analog components to do the heavy number crunching).

It turns out that analog computation is ideally suited to complex differential and integral calculus problems involving time. (I won't go into the theory of things like analog integration, but there's a link at the bottom of the article that provides a clear introduction to the principles involved.) Not only does it solve an equation, but it does so continuously. And even inexpensive hobby-grade integrated circuit amplifiers have sufficient bandwidth to support sampling the result at more than 1 million samples per second.

Here's a very old example of a computer you can build to solve the "damped weight on a spring" problem.

It solves the damped spring's equation of motion,

$$m\,\frac{d^2x}{dt^2} + c\,\frac{dx}{dt} + kx = 0,$$

rearranged so that the highest derivative is expressed in terms of the lower ones and can be integrated: $\frac{d^2x}{dt^2} = -\frac{c}{m}\frac{dx}{dt} - \frac{k}{m}x$.

Using this circuit:

[Figure 1: the three-amplifier analog circuit that solves the damped spring equation]

Even large digital computers at the time did not have the instruction speed to keep up with this very low-cost and very low-power circuit (Figure 1). Remember, a digital computer has to recompute the solution at the desired temporal resolution (1 million solutions per second), and each solution requires the computation of many subcomponents at that same temporal resolution.

But even if digital computers could keep up in time, they were vastly different when it came to power requirements. Digital computers had recently become "solid-state." Large-scale integrated circuits didn't exist yet, but there were various chips that provided things like AND gates, half-adders, and so on. A large digital computer comprised enough of these circuits to amount to 500,000 to 1 million individual transistors. The analog solution in Figure 1 (which uses components that existed in the early 1970s) needs three amplifiers with approximately 20 transistors each, for a total of about 60 transistors. The entire analog circuit draws substantially less power than a single indicator light on the digital computer's control panel. So it's not hard to see the appeal of the analog computer at the time.
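For contrast, here is a minimal Python sketch (my illustration; the parameter values are arbitrary) of what the digital computer must do: rearrange the spring equation for the acceleration and step the state forward one tick at a time, recomputing everything a million times per simulated second. The analog circuit produces the same trajectory continuously, with no stepping at all.

```python
# Digital (time-stepped) solution of the damped spring equation
#   m*x'' + c*x' + k*x = 0
# using semi-implicit Euler integration.

m, c, k = 1.0, 0.5, 4.0    # mass, damping, and spring constant (arbitrary)
dt = 1e-6                  # 1 microsecond step ~ 1 million solutions/second
x, v = 1.0, 0.0            # initial displacement and velocity

for step in range(1_000_000):            # one simulated second
    a = -(c / m) * v - (k / m) * x       # acceleration from current state
    v += a * dt                          # integrate acceleration -> velocity
    x += v * dt                          # integrate velocity -> displacement
    if step % 200_000 == 0:
        print(f"t={step * dt:.1f}s  x={x:+.4f}")
```

Every pass through that loop costs several multiply-adds, and the loop runs a million times per simulated second; the three-amplifier circuit in Figure 1 does the equivalent work as a side effect of obeying Kirchhoff's laws.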

Of course, digital computers have become more power-efficient since the 1970s, but so have analog circuits. Current integrated-circuit operational amplifiers—which have better performance than those in the example above—draw microamps of current, and a modern version of the circuit in Figure 1 could run for months on the power from a AAA battery. So the relative advantages persist.

Also, in the quest for power reduction, digital and analog approaches are becoming more similar. There is a significant effort in the industry to lower the power of digital processors by lowering the voltage at which the chip operates. This shrinks the voltage margin between a one and a zero, so some binary signals are occasionally misread and corrupted. Power is saved at the expense of accuracy. The goal of this approach is to allow the inherently exact digital computations to degrade to a predefined acceptable level of error. For engineering purposes that level of error (as mentioned previously) is on the order of 0.1% and is comparable to the errors that show up in a purely analog computer solution.
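As a rough intuition for that tradeoff, here is a hypothetical sketch (entirely my construction; real chips are characterized very differently) that randomly flips the low-order bits of fixed-point products, standing in for voltage-induced misreads, and measures the resulting relative error:

```python
import random

FRAC_BITS = 16             # fixed-point format with 16 fractional bits
SCALE = 1 << FRAC_BITS
FLIP_PROB = 0.01           # assumed chance each low-order bit is misread

def noisy_fixed_mul(a: float, b: float) -> float:
    """Fixed-point multiply whose lowest bits occasionally flip,
    standing in for threshold misreads at reduced supply voltage."""
    result = (int(a * SCALE) * int(b * SCALE)) >> FRAC_BITS
    for bit in range(4):   # assume only the 4 lowest bits are at risk
        if random.random() < FLIP_PROB:
            result ^= 1 << bit
    return result / SCALE

random.seed(1)
rel_errors = []
for _ in range(10_000):
    a, b = random.uniform(1, 10), random.uniform(1, 10)
    rel_errors.append(abs(noisy_fixed_mul(a, b) - a * b) / (a * b))

print(f"mean relative error: {sum(rel_errors) / len(rel_errors):.4%}")
```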

The good news is that very serious work is being done on combining the capabilities of digital and analog computing into a new type of hybrid chip. And it might be just in time to provide a giant boost to the world of deep neural nets. Summing a large number of weighted inputs into a single output is an ideal problem for an analog computer, and in fact such work is being done on specialized neural computing chips that use analog technology. In some sense neural net computing is going back to its roots: the first large-scale, real-time neural net computer was the Mark I Perceptron, and it was all analog and even partly mechanical!
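To see why this maps so well onto analog hardware, here is a minimal sketch of the core neural-net operation (the input and weight values are made up for illustration): a weighted sum pushed through a threshold. A digital chip performs one multiply-add per input; an analog chip computes the whole sum at once, essentially by letting currents merge on a wire.

```python
# The core perceptron operation: sum weighted inputs, then threshold.
# Digitally this is N multiply-adds per neuron; in analog hardware each
# product is a current, and the sum is just the currents adding on a node.

inputs  = [0.9, 0.1, 0.4, 0.7]    # example input signals
weights = [0.5, -1.2, 0.8, 0.3]   # example learned weights
bias = -0.5

activation = sum(x * w for x, w in zip(inputs, weights)) + bias
output = 1 if activation > 0 else 0

print(f"activation={activation:+.3f} -> output={output}")
```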

If you're interested in an introduction to the basic principles of analog computation, you might find this page on Computational Circuits informative.

