It's quite likely that most people reading this have never touched or even seen an analog computer, let alone programmed one. But analog computers were the first true computing devices, and their history goes back thousands of years.

The ancient astrolabe was an astronomical analog computer; so is a sundial; and so is an hourglass.

For those of you who don't know the distinction between digital and analog computing, it is usually and succinctly described as the difference between counting and measuring. Digital computers count things and operate on the discrete representation of those counts. Analog computers measure things and operate by blending those measures in some way and measuring the resulting blend.

Here are a few of the advantages and disadvantages of each approach.

### Digital

#### Pro

- Exact computations (at least to the precision of the binary representation used)
- Results are not influenced by how much time the computation requires
- Can easily represent processing branches and control program flow

#### Con

- Requires a large number of state (transistor) changes to do simple things (e.g., addition)
- Data must be explicitly fetched from and stored in separate circuits
- Continuous functions must be approximated with repetitive computations (e.g., integrals)

### Analog

#### Pro

- Very few components can accomplish very sophisticated mathematics
- Computations are inherently continuous (the result changes instantly when an input changes)
- Power requirements are minuscule

#### Con

- Answers are approximate (about 0.1%, which is usually adequate for engineering)
- Writing a program is more like designing an electrical circuit
- Inputs and outputs are signals, not numbers (answers are curves, not tables)

Of course, one of the most important analog computers, and the foundation of much of our modern science and engineering, was invented in the early 1600s: the slide rule. John Napier published a paper on the concept of the logarithm, and shortly thereafter William Oughtred (1575–1660), an Anglican minister, invented the circular slide rule.

In case you've forgotten how logarithms work: a logarithm is the power to which a fixed number (the base) must be raised to produce a given number. Oughtred seized on the additive property of logarithms: adding the logarithms of X and Y yields the logarithm of the product of X and Y. He realized that two rulers with logarithmic scales could be slid relative to each other, thus adding the logarithms. But because the scales were logarithmic, the position of the *resultant sum* would directly indicate the *product* of those two numbers.
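Oughtred's trick is easy to verify directly. A quick sketch in Python (the values 3 and 4 are arbitrary examples):

```python
import math

x, y = 3.0, 4.0

# Adding logarithms corresponds to multiplying the numbers themselves:
# log(x) + log(y) == log(x * y)
sum_of_logs = math.log10(x) + math.log10(y)

# "Reading off" the product means undoing the logarithm, which is what
# the aligned logarithmic scales of a slide rule do mechanically.
product = 10 ** sum_of_logs

print(product)  # close to 12.0
```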

Before you summarily dismiss the slide rule as a quaint historical artifact, remember that the SR-71 Blackbird and the Saturn V moon rockets were designed using slide-rule computation. Of course, by the late 1960s electronic computers were used for these engineering tasks, but they were not digital computers; they were analog computers (although by the late 1970s they were technically hybrid computers, which used a digital front end to administer and set up the analog program and initial data and then handed control to the analog components for the heavy number crunching).

It turns out that analog computation is ideally suited to complex differential and integral calculus problems involving time. (I won't go into the theory of analog integration, but a link at the bottom of the article provides a clear introduction to the principles involved.) Not only does an analog computer solve an equation, it does so **continuously**. And even inexpensive hobby-grade integrated-circuit amplifiers have sufficient bandwidth to support sampling the result at more than 1 million samples per second.

Here's a very old example of a computer you can build to solve the "damped weight on a spring" problem.

It solves the damped-spring equation of motion (in its standard form, m·x″ + c·x′ + k·x = 0) using this circuit:

*Figure 1*

Even large digital computers of the era did not have the instruction speed to keep up with this very low-cost, very low-power circuit (*Figure 1*). Remember, a digital computer has to recompute the solution at the desired temporal resolution (1 million solutions per second), and each solution requires computing many subcomponents at that same rate. But even if they could have kept up, the power requirements were vastly different. Digital computers had recently become "solid-state". Large-scale integrated circuits didn't exist yet, but there were chips that provided AND gates, half-adders, and the like, and a large digital computer comprised enough of these circuits to amount to 500,000 to 1 million individual transistors. The analog solution in *Figure 1* (which uses components that existed in the early 1970s) uses three amplifiers with approximately 20 transistors per amplifier, for a total of about 60 transistors. The entire analog circuit draws substantially less power than a **single indicator light** on the digital computer's control panel. So it's not too hard to see the appeal of the analog computer at the time.
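To see what the digital computer is up against, here is a minimal numerical sketch of the "damped weight on a spring" problem. The mass, damping, and spring constants are illustrative assumptions, not values from the original circuit; the point is that the digital machine must redo this arithmetic at every time step, while the analog circuit produces the solution continuously:

```python
# Damped spring: m*x'' + c*x' + k*x = 0, stepped forward in time.
# Illustrative parameters -- not taken from the circuit in Figure 1.
m, c, k = 1.0, 0.5, 4.0

dt = 1e-6          # one microsecond per step: 1 million "solutions" per second
x, v = 1.0, 0.0    # initial displacement and velocity

for _ in range(1_000_000):    # simulate one second of motion
    a = -(c * v + k * x) / m  # acceleration from the equation of motion
    v += a * dt               # the digital computer repeats this arithmetic
    x += v * dt               # at every step; the analog circuit does not

print(x)  # displacement after one second, smaller than the start
```

Each pass through the loop is many machine instructions, so sustaining a million steps per second was far beyond the digital hardware of the day.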

Of course digital computers have become more power-efficient since the 1970s, but so have analog circuits. Current integrated-circuit operational amplifiers, which perform better than those in the example above, draw microamps of current, and a modern version of the circuit in *Figure 1* could run for months on the power from a AAA battery. So the **relative** advantages persist.

Also, in the quest for power reduction, digital and analog approaches are becoming more similar. There is a significant effort in the industry to lower the power of digital processors by lowering the voltage at which the chip operates. This shrinks the voltage margin between a one and a zero, so some binary signals are misread and corrupted. Power is saved at the expense of accuracy. The goal of this approach is to allow the inherently exact digital computations to degrade to a predefined acceptable level of error. For engineering purposes, that level of error (as mentioned previously) is on the order of 0.1% and is comparable to the errors that show up in a purely analog solution.

The good news is that very serious work is being done on combining the capabilities of digital and analog into a new type of hybrid computing chip. And it might be just in time to provide a giant boost to the world of deep neural nets. Summing a large number of weighted inputs into an output is an ideal problem for an analog computer, and in fact such work is being done on specialized neural computing chips that use analog technology. In some sense, neural net computing is going back to its roots. The first large-scale, real-time neural net computer was the Mark 1 Perceptron, and it was entirely analog and even partly mechanical!
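The operation in question is just a multiply-accumulate. Here is a minimal sketch of a single artificial neuron computing that weighted sum (the input, weight, and bias values are arbitrary, chosen for illustration):

```python
# A single artificial neuron: a weighted sum of inputs plus a bias,
# followed by a hard threshold -- essentially what the Mark 1
# Perceptron computed with analog and mechanical components.
inputs  = [0.5, -1.0, 0.25]   # illustrative input signals
weights = [0.8,  0.2, 1.6]    # illustrative learned weights
bias    = -0.1

# In an analog implementation, each product is a scaled current and the
# sum is simply those currents merging at a single circuit node.
activation = sum(w * x for w, x in zip(weights, inputs)) + bias

output = 1 if activation > 0 else 0
print(activation, output)
```

A digital chip spends many transistor switches per multiply; an analog chip gets the summation nearly for free, which is why this workload is attracting hybrid designs.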

If you're interested in an introduction to the basic principles of analog computation then you might find this page on Computational Circuits informative.
