Why the Future of AI & Computers Will Be Analog
Summary
TLDR: The video script discusses the resurgence and potential of analog computing in a world dominated by digital technology. It highlights the energy efficiency of analog systems, which can be up to 1,000 times more efficient than their digital counterparts, and how this could be part of the solution to the climate crisis. The script also touches on the limitations of digital computing, such as physical boundaries and energy consumption, and introduces companies like Mythic and Aspinity that are developing analog chips for modern applications. The potential for hybrid computers that combine the best of both worlds is also explored, hinting at a future where analog computing could play a significant role in our daily lives.
Takeaways
- 📺 Analog computing, once overshadowed by digital, is experiencing a resurgence due to its potential energy efficiency and unique problem-solving capabilities.
- 🌡️ Analog systems have an infinite number of states compared to digital systems, which rely on a finite number of states determined by bits or transistors.
- 🚀 Computing devices have shrunk dramatically since the Space Age and the advent of personal computers, but digital chips may now be approaching the physical limits of miniaturization.
- 💡 Digital computing, particularly in areas like AI and cryptocurrencies, is increasingly energy-intensive, prompting interest in more efficient alternatives like analog computing.
- 🌍 A return to analog computing could significantly reduce energy consumption, with analog processes sometimes being 1,000 times more efficient than digital ones.
- 🛠️ Analog computers operate based on physical models that correspond to the values of the problem being solved, as opposed to digital computers that follow algorithms and discrete data.
- 📉 The limitations of digital computing are being recognized, with experts like Bernd Ulmann suggesting that we are approaching the fundamental physical boundaries of digital elements.
- 🔧 Analog computing's continuous data processing allows for real-time problem-solving and efficient parallel processing, and some emerging analog designs don't need cooling facilities at all.
- 🔄 Hybrid computers that combine the energy efficiency of analog with the precision of digital are being explored for future technology development.
- 🏠 Everyday applications of analog computing could include low-power sensors for voice-enabled devices, environmental monitoring, and wearable technology.
Q & A
What is the fundamental difference between analog and digital computing?
-Analog systems have an infinite number of states and can represent a continuous range of values, while digital systems rely on a finite number of states determined by the number of bits or transistors that can be switched on or off.
How has the advancement of digital computing impacted the size of computing devices?
-Digital computing has led to a significant reduction in the size of computing devices, from room-sized machines to personal computers and smartphones, following Moore's Law, which predicts a doubling of transistors on integrated circuits approximately every two years.
What are some of the environmental concerns associated with digital computing?
-Digital computing, especially in data centers and power-hungry applications like cryptocurrencies and AI, is becoming increasingly energy-intensive, contributing to global energy consumption and carbon emissions. It also requires substantial cooling systems, which can strain water resources.
Why is analog computing considered more energy-efficient than digital computing?
-Analog computing can perform the same tasks as digital computing with a fraction of the energy because it operates on a physical model corresponding to the problem being solved, handling continuous data in real time rather than stepping through discrete transistor switches.
What is the significance of the MONIAC computer in the history of analog computing?
-The MONIAC (Monetary National Income Analogue Computer), created by economist Bill Phillips in 1949, is a classic example of analog computing. It was designed to simulate the Great British economy on a macro level using water to represent money flow, and it could function with an approximate accuracy of ±2%.
What are some practical applications of analog computing today?
-Practical applications of analog computing today include flight computers used by pilots for manual calculations, as well as emerging technologies like low-power sensors for voice-enabled wearables, sound detection systems, and heart rate monitors.
How does the concept of Amdahl's law relate to the limitations of digital computing?
-Amdahl's law states that a system's overall speedup is limited by the portion of its work that must run sequentially and cannot be parallelized. As a result, adding more processors yields diminishing returns, which is a challenge for digital computers trying to handle increasingly complex tasks efficiently (a compact statement of the law is sketched below).
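For reference, here is a standard formulation of Amdahl's law (not quoted from the video; the example numbers are illustrative):

$$ S(N) = \frac{1}{(1 - p) + \frac{p}{N}} $$

where p is the fraction of the work that can be parallelized and N is the number of processors. Even with p = 0.95, the speedup can never exceed 1 / (1 - 0.95) = 20x, no matter how many processors are added.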
What are some of the challenges in integrating analog and digital systems?
-Integrating analog and digital systems requires seamless connectivity and synchronization between the two paradigms, which can be technically challenging. It also involves developing hybrid computers that combine the energy efficiency of analog with the precision and flexibility of digital computing.
What is the potential impact of analog computing on machine learning and AI?
-Analog computing has the potential to significantly reduce the power consumption of machine learning and AI applications by offering a more energy-efficient computing method. Companies like Mythic are developing analog matrix processors that aim to deliver the compute resources of a GPU at a fraction of the power consumption.
How might analog computing change the devices we use in our daily lives?
-As analog computing becomes more integrated with digital systems, we could see devices that are always on, like voice-enabled wearables and smart home sensors, consuming much less power. This could lead to longer battery life and reduced environmental impact without sacrificing functionality.
What are some ways for individuals to explore analog computing at home?
-Individuals can explore analog computing at home through models like the Analog Paradigm Model-1, which is designed for experienced users to assemble themselves, or The Analog Thing (THAT), which is sold fully assembled and can be used for a variety of applications from simulating natural sciences to creating music.
Outlines
📺 The Resurgence of Analog Computing
This paragraph introduces the concept of analog computing and its resurgence in modern technology. It discusses the shift from analog to digital computing and the potential of analog computing to impact daily life. The speaker, Matt Ferrell, shares his curiosity about analog computing sparked by a Veritasium video and his subsequent exploration of the topic. The contrast between analog and digital systems is highlighted, emphasizing the infinite states of analog versus the finite states of digital, represented by bits. The energy efficiency of analog computing is also mentioned as a potential solution to the growing energy demands of digital computing, particularly in the context of cryptocurrencies and AI.
💡 Historical Analog Computers and Their Applications
This paragraph delves into the history and practical applications of analog computers. It mentions the Monetary National Income Analogue Computer (MONIAC) as a prime example, which was designed to simulate the British economy. The paragraph also discusses the accuracy of analog computers and their continued relevance, such as pilots using slide rules for calculations. The contrast between the convenience of digital devices and the specialized applications of analog computers is explored, highlighting the limitations of digital computing and the potential for analog computing to break through these barriers.
🚀 Pushing the Limits of Digital Computing
This paragraph examines the limitations of digital computing, referencing the predictions made by Gordon Moore, known as Moore's Law, and the physical boundaries that digital elements are reaching. It discusses the challenges of miniaturizing computer chips further and the heat generation and cooling requirements of dense components. The paragraph also touches on Amdahl's law and its implications for the efficiency of digital computers, especially when considering sequential operations and the diminishing returns of adding more processors. The potential of analog computing to offer a more parallel and energy-efficient approach is contrasted with the sequential nature of digital computing.
🌐 Future of Analog Computing in Everyday Life
The final paragraph explores the future possibilities of analog computing in everyday life, discussing the development of hybrid computers that combine the energy efficiency of analog with the precision of digital. It mentions companies like Mythic and Aspinity that are working on analog chips for machine learning and low-power sensors. The potential applications of analog computing in household devices are highlighted, such as voice-enabled wearables and heart rate monitoring. The paragraph also addresses the challenges of making analog programming more accessible and the need for seamless connectivity between analog and digital systems. It concludes with a call to action for the audience to consider the potential of analog computing and engage in further discussion.
Keywords
💡 Analog computing
💡 Digital computing
💡 Energy efficiency
💡 Moore's Law
💡 Amdahl's Law
💡 Hybrid computing
💡 Machine learning
💡 Data centers
💡 Differential equations
💡 Surfshark
💡 Climate crisis
Highlights
Analog computing is making a comeback and is also something that never really left.
Analog systems have an infinite number of states, unlike digital systems which rely on a fixed number of states.
Digital computing is becoming increasingly energy intensive, with significant implications for global energy consumption.
Analog computing could be part of the solution to energy efficiency, as it can accomplish tasks for a fraction of the energy.
The MONIAC, created in 1949, is an example of an analog computer used to simulate the economy.
Pilots still use flight computers, a form of slide rule, for calculations without the need for electricity.
Digital devices provide convenience, but analog computing has its own strengths, such as energy efficiency.
Digital computers are hitting basic physical boundaries, limiting how much further they can be shrunk.
Moore's Law, which predicts the doubling of transistors on a chip, is nearing its limits.
The more components on a chip, the harder it is to cool, leading to significant energy and resource use.
Research on new approaches to analog computing has led to the development of materials that don’t need cooling facilities.
Amdahl's law suggests that there will always be operations that must be performed sequentially in digital computing.
Analog computers can work in parallel, allowing for more efficient problem-solving without the need for sequential operations.
Hybrid computers that combine the best features of both digital and analog computing may be the future.
Mythic's Analog Matrix Processor chip aims to deliver significant compute resources at a fraction of the power consumption.
Aspinity's AML100 chip can act as a low-power sensor for various applications, with potential energy savings of up to 95%.
Analog computing, with its potential for energy efficiency and real-time processing, could become more approachable and accessible.
Transcripts
If your taste in TV is anything like mine, then most of your familiarity with
what analog computing looks like probably comes from the backdrops of something like
Columbo. Since digital took over the world, analog has been sidelined into what seems
like a niche interest at best. But this retro approach to computing, much like space operas,
is both making a comeback, and also something that never really left in the first place.
I found this out for myself about a year ago, when a video from Veritasium sparked
my curiosity about analog computing. After that, I started to read a few articles here and there,
and I gotta say…it broke my brain a bit. What I really wanted to know, though, was this:
How can analog computing impact our daily lives? And what will that look
like? Because I definitely don’t have room in my house for this.
I’m Matt Ferrell … welcome to Undecided.
This video is brought to you by Surfshark and all of my patrons on Patreon, but more on that later.
Depending on how old you are, you may remember when it was the norm for a single computer to
take up more square footage than your average New York City apartment. But after the end of the
Space Age and the advent of personal computers, our devices have only gotten smaller and smaller.
Some proponents of analog computing argue that we might just be reaching our limits when it
comes to how much further we can shrink. We’ll get to that in a bit, though. Emphasis on bits.
Speaking of bits, this brings us to the fundamental difference between analog
and digital. Analog systems have an infinite number of states. If I were to heat this room
from 68 F to 72 F, the temperature would pass through an infinite set of numbers,
including 68.0000001 F and so on. Digital systems are reliant on the number of “bits”
or the number of transistors that are switched either on or off. As an example,
an 8-bit system has 2^8, or 256 states. That means it can only represent 256 different numbers.
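(As a side illustration of that finite-states idea, here's a minimal Python sketch, with assumed numbers rather than anything from the video, showing a continuous temperature being forced onto one of an 8-bit system's 256 levels.)

```python
# Minimal sketch: quantizing a continuous (analog) temperature into 8 bits.
# The range and sample values below are illustrative assumptions.

LOW, HIGH = 68.0, 72.0   # analog range, in degrees Fahrenheit
LEVELS = 2 ** 8          # an 8-bit value can distinguish only 256 states

def quantize(temp_f: float) -> int:
    """Map a continuous temperature onto one of 256 discrete codes."""
    fraction = (temp_f - LOW) / (HIGH - LOW)
    return min(LEVELS - 1, max(0, round(fraction * (LEVELS - 1))))

print(quantize(68.0000001))  # 0 -> indistinguishable from 68.0
print(quantize(70.0))        # 128, roughly the midpoint
print(quantize(72.0))        # 255
```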
So, size isn’t the only aspect of the technological zeitgeist that’s changed. Digital
computers solve problems in a fundamentally different way from analog ones. That’s led
to some pretty amazing stuff in the modern day…at a cost. Immensely energy-intensive computing
is becoming increasingly popular. Just look at cryptocurrencies and AI. According to a report
released last year by Swedish telecommunications company Ericsson, the information and
communication technology sector accounted for roughly 4% of global energy consumption in 2020.
Plus, a significant amount of digital computing is not the kind you can take to
go. Just among the thousands of data centers located across the globe, the average campus
size is approximately 100,000 square feet (or just over 9,000 square meters). That's more
than 2 acres of land! Data scientist Alex de Vries estimates that a single
interaction with an LLM is equivalent to “leaving a low-brightness LED lightbulb on for one hour.”
But as the especially power-hungry data centers, neural networks, and cryptocurrencies of the world
continue to grow in scale and complexity…we still have to reckon with the climate crisis. Energy
efficiency isn’t just good for the planet, it’s good for the wallet. A return to analog
computing could be part of the solution. The reason why is simple: you can accomplish the
same tasks as you would on a digital setup for a fraction of the energy. In some cases,
analog computing is as much as 1,000 times more efficient than its digital counterparts.
Before we get into exactly how it works and why we’re starting to see more interest in analog
computers again, I need to talk about another piece of tech that can really help in your daily
digital life and that’s today’s sponsor, Surfshark. Surfshark is a fast, easy to
use VPN full of incredible features that you can install on an unlimited number of devices with one
account. Most of the time when we talk about VPNs we’re focused on giving yourself security as you
travel around the world, but it can do way more than that. Since you can make it look like your
IP address is coming from somewhere else in the world, it unlocks geofencing blocks on content,
like streaming services. But … that’s not all. Even shopping services will sometimes
gate prices based on your location, so you can change your location to make sure you’re getting
the best prices. They also have add-ons to their VPN service to unlock things like Surfshark Alert,
which will let you know if your email or personal details, like passwords, have been leaked online
in a data breach. Right now they’re running a special deal … use my code UNDECIDED to get up
to 3 additional months for free. Surfshark offers a 30-day money-back guarantee,
so there’s no risk to try it out for yourself. I’ve been using Surfshark for years and love it.
Link is in the description below. Thanks to Surfshark, for supporting the channel.
And thanks to all of you, as well as my patrons, who get early, ad-free versions of my videos. So
back to how much more energy-efficient analog computing is than its digital counterparts.
To understand how that works, exactly, we first need to establish what makes analog
computing…analog. The same way you would make a comparison with words using an analogy,
analog computers operate using a physical model that corresponds to the values of
the problem being solved. And yeah, I did just make up an analog analogy.
A classic example of analog computing is the Monetary National Income Analogue Computer,
or MONIAC (which sounds like a long-forgotten car brand), created by economist Bill Phillips in
1949. MONIAC had a single purpose: to simulate the Great British economy on a macro level.
Within the machine, water represented money as it literally flowed in and out of the treasury.
Phillips determined alongside his colleague Walter Newlyn that the computer could function
with an approximate accuracy of ±2%. And of the 14 or so machines that were made,
you can still find the first churning away at the Reserve Bank Museum in New Zealand.
It’s safe to say that the MONIAC worked (and continues to work) well. The same goes
for other types of analog computers, from those on the simpler end of the spectrum,
like the pocket-sized mechanical calculators known as slide rules,
to the behemoth tide-predicting machines invented by Lord Kelvin.
In general, it was never that analog computing didn’t do its job — quite the opposite. Pilots
still use flight computers, a form of slide rule, to perform calculations by hand,
no juice necessary. But for more generalized applications, digital devices just provide a level
of convenience that analog couldn’t. Incredible computing power has effectively become mundane.
To put things into perspective, an iPhone 14 contains a processor that runs somewhere
above 3 GHz, depending on the model. The Apollo Guidance Computer, itself a
digital device onboard the spacecraft that first graced the moon’s surface,
ran at…0.043 MHz. As computer science professor Graham Kendall once wrote,
“the iPhone in your pocket has over 100,000 times the processing power of the computer that landed
man on the moon 50 years ago.” … and we use it to look at cat videos and argue with strangers.
In any case, that ease of use is one of the reasons why the likes of slide rules and
abacuses were relegated to museum displays while electronic calculators reigned supreme.
So much for “ruling.” But, while digital has a lot to offer, like anything else,
it has its limits. And mathematician and self-described “analog computer
evangelist” Bernd Ulmann argues that we can’t push those limits much further. In his words:
“Digital computers are hitting basic physical boundaries by now. Computing
elements cannot be shrunk much more than today,
and there is no way to spend even more energy on energy-hungry CPU chips today.”
It’s worth noting here that Ulmann said this in 2021, years ahead of the explosion
of improvements in generative AI we’ve witnessed in just the past few months,
like OpenAI’s text-prompt-to-video model, Sora. Which really disturbs me
and excites me all at the same time. I need to make a video about that.
But what did he mean by “physical boundaries”? Well…digital computing
is starting to bump up against the law. No, not that kind…the scientific kind.
There’s actually a few that are at play here. We’ve already started talking about
the relationship between digital computing and size, so let’s continue down that track.
In a 1965 paper, Gordon Moore, co-founder of Intel, made a prediction that would come to
be known as “Moore’s Law.” He foresaw that the number of transistors on an integrated
circuit would double every year for the next 10 years, with a negligible rise in
cost. And 10 years later, Moore changed his prediction to a doubling every two years.
As Intel clarifies, Moore’s Law isn’t actually a scientific law, and Moore isn’t too keen on
his work being referred to as a “law.” However, the prediction has more or less stayed true as
Intel (and other semiconductor companies) have hailed it as a goal to strive for:
more and more transistors on smaller and smaller chips, for less and less money.
Here’s the problem. What happens when we can’t make a computer chip any smaller? According to
Intel, despite the warnings of experts in the past few decades, we’ve yet to hit that wall. We can
take it straight from Moore himself, though, that an end to the standard set by his law is
inevitable. When asked about the longevity of his prediction during a 2005 interview, he said this:
“The fact that materials are made of atoms is the fundamental limitation and it's not that
far away. You can take an electron micrograph from some of these pictures of some of these devices,
and you can see the individual atoms of the layers. The gate insulator in the most
advanced transistors is only about three molecular layers thick…We're pushing up against some fairly
fundamental limits, so one of these days we're going to have to stop making things smaller."
Not to mention, the more components you cram onto a chip, the hotter it becomes during use,
and the more difficult it is to cool down. It’s simply not possible to use all the transistors
on a chip simultaneously without risking a meltdown. This is also a critical problem
in data centers, because it’s not only electricity use that represents a huge
resource sink. Larger sites that use liquid as coolant rely on massive amounts of water
a day — think upwards of millions of gallons. In fact, Google’s data centers in The Dalles,
Oregon, account for over a quarter of the city’s water use.
Meanwhile, emerging research on new approaches to analog computing has
led to the development of materials that don’t need cooling facilities at all.
Then there’s another law that stymies the design of digital computers:
Amdahl’s law. And you might be able to get a sense of why it’s relevant just by looking at
your wrist. Or your wall. Analog clocks, the kind with faces, can easily show us more advantages of
analog computing. When the hands move forward on a clock, they do so in one continuous movement,
the same way analog computing occurs in real time, with mathematically continuous data. But when you
look at a digital clock, you’ll notice that it updates its display in steps. That’s because,
unlike with analog devices, digital information is discrete. It’s something that you count,
rather than measure, hence the binary format of 0s and 1s.
When a digital computer tackles a problem, it follows an algorithm, a finite number
of steps that eventually lead to an answer. Presenting a problem to an analog computer is
a completely different procedure, and this cute diagram from the ‘60s still holds true today:
First, you take note of the physical laws that form the context of the problem you’re
solving. Then, you create a differential equation that models the problem. If your
blood just ran cold at the mention of math, don’t worry. All you need to know
is that differential equations model dynamic problems, or problems that involve an element
of change. Differential equations can be used to simulate anything from heat flow in a cable
to the progression of zombie apocalypses. And analog computers are fantastic at solving them.
Once you’ve written a differential equation, you program the analog computer by translating
each part of the equation into a physical part of the computer setup. And then you get your answer,
which doesn’t even necessarily require a monitor to display!
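(To make that procedure a little more concrete, here's a minimal sketch of my own, not an example from the video: the decay equation dx/dt = -k·x, the kind of differential equation an analog machine solves by wiring an integrator into a feedback loop, approximated here digitally step by step for comparison. The constants are illustrative.)

```python
# Digital, step-by-step approximation of the decay equation dx/dt = -k * x.
# On an analog computer the same equation becomes a feedback loop:
# an integrator whose scaled, inverted output drives its own input,
# and the solution appears continuously as a voltage.

k = 0.5     # decay rate (illustrative)
x = 1.0     # initial condition x(0)
dt = 0.01   # discrete time step for the digital version

for step in range(500):       # simulate 5 seconds
    x += -k * x * dt          # Euler integration: one discrete step at a time
    if (step + 1) % 100 == 0:
        print(f"t = {(step + 1) * dt:.1f} s   x = {x:.4f}")

# The exact analog answer is x(t) = exp(-k * t); the digital loop only
# approaches it by counting through discrete steps.
```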
All of that might be tough to envision, so here’s another analog analogy that hopefully
is less convoluted than the labyrinth of wires that make up a patch panel. Imagine a playground.
Let’s say two kids want to race to the same spot, but each one takes a different path.
One decides to skip along the hopscotch court, and the other rushes to the slide. Who will win?
These two areas of the playground are like different paradigms of computing.
You count the hopscotch spaces outlined on the ground and move between them one by one,
but you measure the length of a slide, and reach its end in one
smooth move. And between these two methods of reaching the same goal,
one is definitely a much quicker process than the other…and also takes a lot less energy.
There are, of course, caveats to analog. If you asked the children in our playground
example to repeat their race exactly the same way they did the first time, who do you think
would be more accurate? Probably the one whose careful steps were marked with neat squares,
and whose outcomes will be the same — landing right within that final little perimeter of
chalk. With discrete data, you can make perfect copies. It’s much harder to create copies with
the messier nature of continuous data. The question is: do we even need 100% accurate
calculations? Some researchers are proposing that we don’t, at least not all the time.
That said, what does this have to do with Amdahl’s law? Well, we can extend our existing scenario
a little further. It takes time to remember the rules of hopscotch and then follow them
accordingly. But you don’t need to remember any rules to use a slide — other than maybe “wait
until there isn’t anybody else on it.” Comment below with your favorite playground accidents!
In any case, because digital computers 1. reference their memories and 2. solve problems
algorithmically, there will always be operations (like remembering hopscotch rules) that must be
performed sequentially. As computer science professor Mike Bailey puts it, “this includes
reading data, setting up calculations, control logic, storing results, etc.” And because you
can’t get rid of these sequential operations, you run into diminishing returns as you add
more and more processors in attempts to speed up your computing. You can’t decrease the size of
components forever, and you can’t increase the number of processors forever, either.
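(For a feel of how quickly those diminishing returns bite, here's a small sketch with fractions I've assumed for illustration, not figures from the video, plugging numbers into Amdahl's law.)

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n)
# p = fraction of the work that can run in parallel (assumed below)
# n = number of processors

def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

p = 0.95  # even if 95% of the job parallelizes perfectly...
for n in (2, 8, 64, 1024):
    print(f"{n:5d} processors -> {amdahl_speedup(p, n):5.1f}x speedup")

# ...the sequential 5% caps the speedup at 1 / 0.05 = 20x,
# no matter how many processors you add.
```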
On the other hand, analog computers don’t typically have memories they
need to take time to access. This allows them more flexibility to work in parallel,
meaning they can easily break down problems into smaller,
more manageable chunks and divide them between processing units without delays.
Here’s how Bernd Ulmann explains it in his 2023 textbook, Analog and Hybrid Computer Programming,
which contributed a considerable amount of research to this video:
“Further, without any memory there is nothing like a critical section,
no need to synchronize things, no communications overhead,
nothing of the many trifles that haunt traditional parallel digital computers.”
So, you might be thinking: speedier, more energy-efficient computing sounds great,
but what does it have to do with me? Am I going to have to learn how to write differential equations?
Will I need to knock down a wall in my office to make room for a retrofuturist analog rig?
Probably not. Instead, hybrid computers that marry the best features of both digital and
analog are what might someday be in vogue. There are already whisperings of Silicon Valley
companies secretly chipping away at…analog chips. Why? To save on electricity … and
cost. The idea is to combine the energy efficiency of analog with the precision of
digital. This is especially important for continued development of the power-hungry
machine learning that makes generative AI possible. With any luck, that means
products that are far less environmentally and financially costly to maintain.
And that’s exactly what Mythic, headquartered in the U.S., is aiming for. Mythic claims that its
Analog Matrix Processor chip can “deliver the compute resources of a GPU at 1/10th the power
consumption.” Basically, as opposed to storing data in static RAM, which needs an uninterrupted
supply of power, the analog chip stores data in flash memory, which doesn’t need power to
keep information intact. Rather than 1s and 0s, the data is retained in the form of voltages.
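(To give a sense of what computing with voltages rather than 1s and 0s can mean, here's a conceptual sketch of my own, a simplification rather than Mythic's actual design: in analog in-memory computing, stored weights act like conductances, inputs arrive as voltages, and Ohm's law plus Kirchhoff's current law perform the multiply-accumulate of a matrix-vector product in the physics itself.)

```python
# Conceptual sketch of an analog matrix-vector multiply (illustrative, simplified).
# Each stored weight behaves like a conductance G; each input is a voltage V.
# Ohm's law gives a current G * V in every cell, and Kirchhoff's current law
# sums the currents along each output line -- no sequential digital steps.

weights_as_conductances = [   # illustrative values
    [0.2, 0.5, 0.1],
    [0.7, 0.3, 0.9],
]
input_voltages = [1.0, 0.5, 2.0]

output_currents = [
    sum(g * v for g, v in zip(row, input_voltages))
    for row in weights_as_conductances
]
print(output_currents)   # [0.65, 2.65] -> one accumulated current per output
```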
Where could we someday see analog computing around the house, though? U.S.-based company
Aspinity has an answer to that. What it calls the “world’s first fully analog
machine learning chip,” the AML100, can act as a low-power sensor for a bunch of applications,
according to its website. It can detect a wake word for use in voice-enabled wearables like
wireless earbuds or smartwatches, listen for the sound of broken glass or smoke alarms,
and monitor heart rates, just to name a few.
For those devices that always need to be on, this means energy savings that are
nothing to sneeze at (although I guess you could program an AML100 to detect
sneezes). Aspinity claims that its chip can enable a reduction in power use of 95%.
So, the potential of maximizing efficiency through analog computing is clear,
and the world we interact with every day is itself analog. Why shouldn’t our devices be, too? But to
say that analog programming appears intimidating (and dated) is…somewhat of an understatement.
It’ll definitely need an image upgrade to make it approachable and accessible to
the public — though there are already models out there that you can fiddle with yourself at home,
if you’re brave enough. German company Anabrid, which was founded by Ulmann in 2020,
currently offers two: the Analog Paradigm Model-1, and The Analog Thing (or THAT).
The Model-1 is intended for more experienced users who are willing to assemble the machine
themselves. Each one is produced on demand based on the parts ordered,
so you can tailor the modules to your needs.
THAT, on the other hand…and by THAT I mean THAT: The Analog Thing, is sold fully assembled. You
could also build your own from scratch — the components and schematics are open source.
So what do you actually do with the thing? Y’know…THAT?
I’ll let the official wiki’s FAQ answer that:
“You can use it to predict in the natural sciences, to control in engineering,
to explain in educational settings, to imitate in gaming, or you can use it for the pure joy of it.”
The THAT model, like any analog computer, solves whatever you can express in a
differential equation. As a reminder, that’s basically any scenario involving change,
from simulating air flow to solving heat equations. You can also make music!
But as analog computing becomes more readily available, there’s still a
lot of work to be done. For one thing, it’ll take effort to engineer seamless
connectivity between analog and digital systems, as Ulmann himself points out.
Until then, what do you think? Should we take the word of analog evangelists as gospel? Or
are we better off waiting for flying cars? Jump into the comments and let me know. Be
sure to check out my follow-up podcast, Still To Be Determined, where we'll be discussing
some of your feedback. Before I go, I’d like to welcome new Supporter+ patrons
Charles Bevitt and Tanner. Thanks so much for your support. I’ll see you in the next one.