by Nicola Phillips, Copywriter and Dip Patel, CTO
Chips beget more chips.
In 1965, Gordon Moore predicted that the number of transistors on a microchip would double every two years while the cost per transistor would halve. Moore expected his prediction to hold for a decade at most. But for half a century, Moore's Law has largely held true.
Computer chips have continued, over the last 50 years, to do more and cost less — at an accelerating pace. Today, a chip the size of a fingernail can hold billions of transistors.
In Dip’s Chips Part 1, we covered an overview of chips and how crypto mining is driving their evolution. If you missed it, catch up here.
The term “chips” actually covers a bunch of things, but for the sake of this discussion, we’re interested in a subset of chips called processors. Processors do the math that allows all of the other systems to function.
Processors come in a growing number and variety of forms. The best known is the Central Processing Unit (CPU), a general-purpose processing chip. CPUs can handle a lot of different tasks, allowing them to function essentially as the brain of a computer or smartphone.
CPUs have dominated the chip scene for the past 50+ years.
In this installment of Dip’s Chips, our focus is on a different type of chip, the Application Specific Integrated Circuit (ASIC), and the next decade of chip development for asynchronous tasks.
ASICs and a turning tide
ASICs don’t have the breadth of CPUs, but they have far more depth. ASICs, as the name would suggest, perform one task (or perhaps a few tasks) incredibly well.
ASICs make very specific tasks considerably more efficient and considerably cheaper. Let's say you're a company that does a specific type of encryption. If you try to use a general-purpose chip for that, you're paying for a broad range of capabilities that you don't actually need to perform that one task. (And you're paying a lot for them.) A specialized chip designed for that exact type of encryption is much more cost-effective.
The last 30 years were defined by an extraordinary amount of software development. CPUs were good enough and fast enough that it didn’t make sense to try to build a bunch of specialized chips, especially if software couldn’t keep up. So since the 90s, most of the industry’s energy has gone into developing software.
The impact has been significant. Software doesn't look anything like it did in the 90s. I'm not a coder, but if I were so inclined, I could write code on the same device I'm using to write this blog, in a Word doc if I really wanted to. Eight-week courses can now yield six-figure developer jobs. Software development has gotten both really good and really fast. Now, it's the hardware that needs to catch up again.
In response, specialized chip development has taken off. Billions of dollars are being spent to figure out how to make chips — more of them, and different and better ones. Over the coming years, it’s only going to become easier and more accessible to design, prototype, test, and produce chips that are more specialized than a CPU.
One innovation leading that evolution is chip simulation. Chip simulations are not new. But in 2006, the process was prohibitively expensive and thus rare. Now, chip simulators are entering the market — and the mainstream.
The growth of asynchronous tasks
Twenty years ago, there were basically two types of chips: GPUs (Graphics Processing Units) and CPUs. Today, there are hundreds. That number is likely to increase 10x or 100x over the next decade. And more and more of these chips will be customized for a specific purpose.
Many will be designed for asynchronous tasks (i.e., applications that don't need to happen in real time and can therefore be batched). After a hiatus, the demand for batch processing is growing again.
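"Batchable" just means the work can be collected and processed in groups rather than one item at a time, the instant it arrives. A minimal Python sketch of that pattern (purely illustrative, not any particular system's pipeline):

```python
from itertools import islice

def batched(iterable, size):
    """Yield successive fixed-size batches from an iterable."""
    it = iter(iterable)
    while batch := list(islice(it, size)):
        yield batch

# Ten readings handled in batches of four instead of individually.
readings = list(range(10))
batches = list(batched(readings, 4))
print(batches)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Grouping work this way is what lets batch jobs run wherever and whenever compute is cheapest, rather than demanding an always-on, real-time response.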
A lot of asynchronous tasks fall under the broad bucket of AI. AI is, at its core, extrapolating insights from data.
The data each of us generates every day keeps growing exponentially. That data tends to be persistent, with no expiration, and more and more companies are focused on gaining a competitive advantage by gathering more/better/different insights from it.
There’s a chip for that
The advent of more specialized chips for asynchronous tasks will change the way that we gather data. Take Alexa as an example. The “brain” that powers Amazon’s digital assistant isn’t software — it’s a collection of microchips, each designed for a specific task.
Or, let's say you have a smartwatch, like a Garmin or an Apple Watch. You can open an app on your phone and see all of the individual pieces of data that your watch has gathered over the course of a day, such as your heart rate or the places you've visited. This work used to be done by software, but now it's done by a chip designed to do that single task (sense your heart rate or location) and nothing else. It does that one task really, really well.
In fact, a ton of what’s done right now using software could be done with a specialized chip.
This will change the role of developers. The software-only jobs of today will evolve to the software plus hardware jobs of tomorrow. The hackers of today will be the chip hackers of tomorrow. As it always does, the pendulum will swing back. Maybe the next 30 years won’t be defined by significant hardware advances, but the next five certainly will, and likely the next ten-plus.
Chip development will continue to change the way we compute. But what’s particularly exciting to us is that computing advances will also change the way we design chips. As our processes for extrapolating and evaluating data get smarter, we will develop chips that can handle these ever-more complex tasks.
A major chip transformation is coming, and it’s a tidal wave. We’re excited to be at its crest. Soluna is at the forefront of batchable computing powered by stranded renewable energy. Check out our recent projects here.