Digital is the past—the future is analog. Crackling, hissing, buzzing, messy, full-spectrum analog. These arcane electronic contraptions helped put people on the moon, and they could be the key to true AI, perfect biological simulations, and even room-temperature quantum computing. Featuring a 1960s Smith-Corona electric typewriter and Ahmad Jamal.
This is what I thought about when I first heard the term, “analog computer.” It’s a 1960-something Smith Corona Coronet electric typewriter. It’s beautifully made, with a muted teal case and ivory keys. The plastic has yellowed a bit over the ages, but it’s in perfect working order. The chrome return arm is a work of art, an elegantly sculpted piece of steel that sweeps toward the typist like a sabre. The keys are light, and if you’re a brisk typist like me, the typebars blur as the words pile up into paragraphs and pages.
But despite all its magnificence, the Coronet is not really an analog computer. When I strike a key, a switch activates a complex chain of gears and cams that swing the corresponding typebar. The clockwork mechanism is undoubtedly complex, and there is plenty of math involved in that gear train, but it’s not solving any mathematical problems. The Coronet is an analog device. It transforms the input press of a key into the output of a letter on the page. It does this without any instruction from me. It doesn’t have to run any computer code or execute any program between the time I press a key and the time it swings the typebar. It was built to type, and it simply types. This, it turns out, is the crucial concept in analog computing.
But what is analog? Analog is often put in opposition to digital. But that’s not quite right. It’s more like digital is a representation of analog. The vinyl records in your (or your parents’) living room were made with an analog process. In the early days, they’d record music directly onto wax. A band or musician would play into a big horn, which would vibrate a needle that cut grooves into wax on a rotating cylinder. The cylinder would capture the needle’s vibrations as it spun. After the song was over, you’d set the needle at the beginning of the cylinder, rotate it, and the needle would vibrate, warbling a very tinny rendition of the band’s performance out of the horn. This is how recording music worked until very recently. Sure, things became much more sophisticated (magnetic tape replaced wax cylinders) but music was analog.
Analog recordings are a constant unbroken stream of information. When you’re listening to an analog recording, you’re hearing variations in that stream. But it’s a noisy stream. You can hear the noise in the crackle and pop of a record or the hiss of a cassette tape.
When you take that noisy unbroken stream and feed it into a machine that’s built to transform it into the solution to your problem, you get an analog computer.
It’s a strange concept to us, the digital generation. To us, a computer takes written code and uses it to flip tiny switches to either one or zero. It uses this binary yes-no logic to solve problems. Analog computers don’t work this way. At all.
Analog computers take an unbroken stream, like that crackly vinyl recording, and feed it through a series of components that change it in specific ways. They can run the stream through different components to get different results. Say you run a current of electricity through a box with resistors that cut its voltage in half. You’ve just done division. Another box can double the voltage. You have multiplication. In an analog computer, there are many components that can do many specific things like addition, subtraction, multiplication, division, even logarithmic functions. String them together in the right way and you can solve many mathematical problems. You can even use them to solve complex differential equations. In fact, they’re really really good at solving differential equations.
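To make that chaining concrete, here’s a toy sketch in Python. This is purely illustrative: a real analog computer is a physical circuit, not code, and these function names are my own. Each “block” transforms a signal, and patching blocks together composes mathematical operations.

```python
# Toy model of analog computing blocks: each block transforms a
# signal (here, a voltage) into a new one. Chaining blocks is the
# analog equivalent of composing mathematical operations.

def halve_voltage(v):        # a resistor divider: division by two
    return v / 2.0

def double_voltage(v):       # an amplifier stage: multiplication by two
    return v * 2.0

def sum_voltages(v1, v2):    # a summing junction: addition
    return v1 + v2

# "Patch" the blocks together to compute (a + b) / 2
a, b = 6.0, 4.0
output = halve_voltage(sum_voltages(a, b))
print(output)  # → 5.0
```

In a real machine, the “patching” is literal: cables route one block’s output voltage into the next block’s input.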
Analog computers were a really big deal in the early days of computing. Digital computers used vacuum tubes and punch cards and were pretty limited in what they could do. No early digital computer could model complex systems or do complex calculations as fast as analog computers. Analog computers were used to calculate rocket trajectories, to model flight paths, and simulate complex systems. Analog computers were crucial to the Apollo program, where they were used to calculate all the stuff early digital computers couldn’t do in a timely manner.
You may have seen pictures of the Apollo engineering labs. Walls of blinking lights and tiny oscilloscope screens matted with webs of countless patch cables. Those were analog computers. Whenever engineers wanted to solve a new problem, they’d manually swap the patch cables around until the system was in the correct configuration. There is no program or programming language in analog computing, just different configurations.
It’s a foreign and mind-boggling way to think about computing. But it works really well and has some big advantages over digital computers. For specific tasks, like doing big nasty differential equations, analog computers are more efficient and faster than digital computers. Make the right configuration, run a little power through it and you instantly have your answer. Kinda.
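Here’s a sketch of what “run a little power through it and you instantly have your answer” means for a differential equation. In a real analog computer, an integrator circuit with its output fed back through an inverting gain stage solves dy/dt = −k·y continuously. Below I approximate that continuous behavior digitally with tiny time steps (an assumption for illustration, since Python can’t be truly continuous).

```python
# Digitally approximating an analog integrator solving dy/dt = -k*y:
# the integrator accumulates its own (negated, scaled) output.

import math

k = 1.0        # decay constant set by the gain stage
y = 1.0        # initial condition (the integrator's starting voltage)
dt = 1e-4      # a tiny step standing in for continuous time
for _ in range(10_000):    # simulate one second of circuit time
    y += (-k * y) * dt     # feedback flows into the integrator

print(y)                   # ≈ 0.3679
print(math.exp(-1.0))      # the exact solution y(1) = e^(-1) ≈ 0.3679
```

The analog version doesn’t loop at all: the physics of the circuit performs the integration, and the answer appears as a voltage you read off an oscilloscope.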
Remember the crackle of a vinyl record, or the hiss of a cassette tape? That noise is present in analog computers, and it means they’re not 100 percent accurate. Imagine a flickering wave on an old-timey oscilloscope. That’s a good way to picture the output of an analog computer. You get your answer, within a certain range. It’s up to you (or a digital computer) to interpret it.
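Interpreting that flickering, noisy output might look something like this sketch (my own toy example, not a real instrument interface): take repeated readings and average them to land inside the noise band.

```python
# Toy model of reading a noisy analog output: each reading carries a
# little random noise, like record crackle, so we average many
# readings to interpret the answer.

import random

random.seed(42)  # fixed seed so the sketch is repeatable

def read_analog_output(true_value, noise=0.01):
    # a single noisy measurement of the machine's output voltage
    return true_value + random.uniform(-noise, noise)

readings = [read_analog_output(5.0) for _ in range(100)]
estimate = sum(readings) / len(readings)
print(estimate)  # close to 5.0, within the noise band
```

This is exactly the “up to you (or a digital computer) to interpret it” step: the raw signal is never exact, but it reliably hovers around the true answer.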
In computing, having less than 100 percent accuracy is pretty terrible. But in many circumstances, approximate answers to your problems are more than good enough. And analog computers have a lot of advantages that more than make up for their lack of accuracy.
Analog computers are super efficient. Every time a digital computer flips a switch, it takes energy. And today’s digital computers have a lot of switches. Billions of them, in fact. Digital computers operate on a clock speed, a number you’ve probably heard quoted. It denotes how many computing cycles a processor can run in a second. A 4 GHz processor runs four billion cycles per second. Analog computers don’t have a clock speed because they don’t have clocks. They don’t operate in discrete increments, but in a continuous flow. That means they can run on just a few watts.
Of course the analog computers we build can only perform very specific tasks. But the gooey analog computer between our ears can easily tackle the most complex and intricate tasks. That’s right, your brain is effectively an analog computer. It doesn’t run “code” or read and write ones and zeroes. It is a vast and insanely complex network of components (brain cells) that perform specific functions. When it needs to calculate something, it finds the right configuration, or connection, to do what it needs. And it does all this automatically with noisy low-voltage electricity within a messy soup of neurotransmitters. Still, it is immensely powerful.
The human brain is responsible for the Lascaux cave paintings, the Great Pyramids, the works of Shakespeare, the Nine Virtues, Jazz, and even science itself. A single human brain has a peak performance of about 38 petaflops, which is about ⅕ the power of the world’s most powerful supercomputer. And it runs on about 20 watts.
The world’s most powerful supercomputer, by the way, is IBM’s Summit. It occupies a warehouse and uses about 13 megawatts of power. It runs at a peak of 200 petaflops. So it has approximately the computing power of just five human brains.
Of course we’re comparing apples and oranges, or digital and analog. Which isn’t fair to either. Our brains are capable of magical feats, but aren’t capable of performing the kind of exacting calculations that a supercomputer can tackle. Likewise, supercomputers aren’t capable of directing their own work, or of creating anything on their own (yet). And really, we work with computers. Our analog brains work in concert with digital computers to crack the universe’s greatest mysteries.
And soon analog computers will rejoin the party. New hybrid systems are being developed that will take advantage of analog’s low-power, super-speedy processing. Imagine a small configurable analog computer that can churn out the answers to differential equations embedded within a digital computer. The digital computer could use the analog computer to boost its performance in specific situations where 100 percent accuracy isn’t needed.
Computer scientist Yannis Tsividis at Columbia University is working on a small-scale analog computer that could be used in this way. He and several of his grad students have built working prototypes that can interface with digital computers via digital-analog converters. Their analog computer chip can instantly crack calculations and hand the results to a digital computer to use.
This isn’t a new approach. The hybrid computers that helped put people on the moon worked in much the same way. Now, nearly 60 years later, we have manufacturing techniques far superior to what NASA was working with. Today’s tiny analog computers are faster, more accurate, and more efficient than ever.
So how would we use these hybrid computers? Analog computers are really good at simulating analog processes—like biology or chemistry. Because they work just like those messy real-world systems. It’s conceivable to build an analog computer that could simulate a single cell, or an entire organism. With nearly perfect analog recreations of biological systems, we could test medical treatments instantly. We wouldn’t need animal testing or human trials to know whether a new drug is effective or has side effects.
But why not just make a simulation within a digital computer? It turns out it’s extremely difficult to simulate messy analog processes in a digital environment. You have to fake randomness with time-consuming calculations, while analog computers are naturally noisy and random, so the randomness comes for free. And digital computers run on sets of logical instructions, or algorithms. It’s extremely difficult to write algorithms to represent natural phenomena because natural systems don’t run on algorithms. They just run. And so do analog computers.
Professor Rahul Sarpeshkar at Dartmouth College is working to build analog computers based on those natural systems. His team has created an analog cytomorphic silicon chip, a processor modeled on a cell, that can compute certain math functions 31 times faster than Matlab software. He plans to use the same principles to build an analog computer that emulates quantum phenomena to create a kind of room-temperature quantum computer. It would use a series of analog components to simulate quantum phenomena and a replica of the inner ear to help sort the output.
Turns out the inner ear, or cochlea, is the most sophisticated spectrum analyzer we’ve ever seen. It naturally separates a wide spectrum of sound frequencies into high and low, and even has a kind of gain control to help equalize sound.
Sarpeshkar calls his analog inner ear a “quantum cochlea,” and it would be used to interpret the signals from his simulated analog quantum computer components. He has filed a patent application for the system, which is available online if you want to read it. It’s not exactly something you can build in your garage. Well, maybe you can. If you’re a bored electrical engineer slash computer scientist slash physicist slash biologist. With lots of money. And a staff of researchers.
Analog computers could also play a big role in the coming AI revolution. They are especially good at things that digital computers are bad at, like vision and movement. They can also be used in machine learning to quickly find the “best guess” to complex problems on the fly. You see, when things change in a digital computer, it has to re-run its algorithm from scratch. This takes time and a ton of energy. Analog computers are just always running. If the environment changes, they simply evolve to match. Imagine an analog computer receiving an input signal of X and spitting out Y. If the value of X changes, Y automatically changes. No need to recalculate.
Again, analog computers aren’t as mercilessly accurate as digital computers, but they’re fast and flexible. And that makes them perfect for machine learning, especially if the machines have to interact with the real, unpredictable world. Just as long as they don’t decide to chew us up into a digestible slurry to power their super-efficient analog processes…
Expect to hear more about analog computing in the future. I suspect it’ll play an important role in simulating biological systems for drug and medical testing, and will undoubtedly play a role in simulating complex systems in physics. Will they be able to simulate quantum computing at room temperature? I don’t know. I’d love to chat with Professor Sarpeshkar about it, but I’m afraid my brain would melt. Perhaps it’s worth the risk.
You can watch an excellent TED talk by Professor Sarpeshkar on YouTube. He explains the basics of analog computers, their advantages, disadvantages, and how we can model them on biological systems. It’s about 20 minutes long and well worth a watch. I’ll link to it in the show notes.
And that’s it for this episode. Stick around for the next one, when I’ll hopefully explore some more superhero science. I mean, there are like seven Captain Marvels to talk about. Stay tuned for that one.