
There’s a new type of data center coming — and it could help save the world

Keywords: Batchable Computing, Computing, Data Centers, Demand Response, HPC

By Dip Patel, CTO

There’s a startup worth half a trillion dollars by latest estimates. It was publicly launched in 2004.

Take a second and try to think back to that period (it was 15 years ago!).

Want a hint? I present to you one of the best phones in 2004 — the Motorola Razr.

Check out that sweet mini-USB port

There was no modern smartphone (the iPhone launched in June 2007). There wasn’t ubiquitous, cheap, fast, and reliable internet. Wi-Fi was relatively new. Mobile networks were optimized for voice and handoffs (handing off a call from one cell tower to the next without dropping it), not high-speed data. It was 2004, when satellite radio was extremely hot tech. It was also the season the Philadelphia Eagles made it to the Super Bowl! You there yet?

2004 is when Amazon publicly launched what would become one of the most revolutionary companies: AWS (Amazon Web Services), a business that has since grown to be worth over 55% of Amazon’s total value. Bezos and the AWS team knew that everything would end up on the web, and that providing solid infrastructure for that new world would be profound. Think about it. Comcast or Verizon should have seen this coming first.

What’s funny is that if you asked a layperson what AWS stands for, they probably wouldn’t know. To most people, Amazon is a company sending them chips (electronic, potato, or poker), or whatever, overnight.

AWS is a software platform that allows just about anyone to quickly deploy scalable and reliable software on the web. A majority of the apps, websites, and services that run in the “cloud” in 2019 are hosted on servers supported by a company like AWS (other providers exist too, such as Google and Microsoft). Believe it or not, the cloud actually exists on the surface of our awesome planet. 🙂

So, when you stream a video, download a game, or video chat with your goldfish, a significant portion of the work is being done by these servers, which are hosted inside huge buildings called data centers. And while Software is Eating the World, the data centers that host the software are spreading and growing just as fast. (Some reports project CAGRs exceeding 14% through 2026!)

The thing is, while this market continues to grow, there is a new data center architecture being born in the shadows. As I mentioned in a previous blog post, Dips Chips, new advances in hardware manufacturing are enabling a proliferation of distinct ASICs (Application Specific Integrated Circuits): chips that are highly specialized to do single tasks very efficiently.

In this blog, I’ll expand on how these chips will drive a new data center architecture — one that will absolutely change the world and usher in a new era of abundant, sustainable energy. There isn’t yet a term for this type of data center, but hopefully you can help me define one!

Current Data Centers: Hyperscale, Hyperspeed, Hyper Complicated

As you look at Amazon’s homepages throughout the years, you’ll notice that they are very customer-centric, and their main focus has been — and still is — providing users access to tools that help their businesses reach scale. They can do this by taking advantage of Amazon’s technology platform.

Amazon knows how to scale better than almost any company on the planet. As you review the home pages over the years, you’ll notice that while AWS continued to grow, the core value proposition remained the same. Check out the gallery below to review the AWS home pages over the years. (Huge shoutout to archive.org and the Wayback Machine for archiving all this stuff.)

You’ll also notice how much the offering has matured and grown over the years but still maintained focus. More on that later.

Amazon’s data centers (and almost all data centers out there today) are focused on the world as we know it: Web 2.0, the real-time web. This is a world where the demand for bandwidth (internet speed) far exceeds the demand for computation. Delivering a video requires a lot of bandwidth, but not a lot of number crunching.

These data centers are built around content delivery (articles, images, video), and real-time interactions (gaming, financial markets, social media, e-commerce). This means these data centers are designed to be a very specific kind of fast, where real-time interactions are critical. For example, if I click play on a video on the internet, I expect it to play instantly. If I do a video call, I expect no delay or lag time. If I play games with someone across the world, we should both experience the same thing at the same time.

This means data centers have to be tuned very specifically, just like F1 cars.

Look at all the tech to route airflow

F1 cars are fantastic machines, but they are extremely high maintenance — similar to current hyperscale data centers.

Hyperscalers are like very specialized F1 Cars

Here is another way to look at it: the more specialized the requirements, the more expensive the data center becomes to deploy. It also heavily limits WHERE these data centers can be deployed, since locations that meet all of those criteria are hard to find.

Let’s examine the typical unit economics for a hyperscale data center (approx. 21 MW of power consumption, at a density of 131.25 W per square foot).

Source: US Chamber of Commerce
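As a quick back-of-the-envelope check, here is what those two figures imply for floor area. This is my own arithmetic, not a number from the source data:

```python
# Back-of-the-envelope: implied floor area of the hyperscale facility above.
# The two inputs are the figures quoted in the text; the result is my own arithmetic.
total_power_w = 21_000_000      # approx. 21 MW facility
density_w_per_sqft = 131.25     # quoted power density

implied_area_sqft = total_power_w / density_w_per_sqft
print(f"Implied floor area: {implied_area_sqft:,.0f} sq ft")  # -> 160,000 sq ft
```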

I took this data and built a 15-year financial pro forma, which produced these two charts.

Pro forma financials for Typical Hyperscale Data Center

As you can see from these two charts, the largest portion of the costs (even 15 years into operation) is IT Equipment, followed by Power, Staff, and Construction.
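If you want to play with the same idea yourself, here is a minimal sketch of how such a pro forma can be assembled. Every number below is a placeholder, not a figure from the chart above; the point is only the structure: one-time costs paid up front plus recurring annual costs, accumulated over 15 years.

```python
# Minimal 15-year pro forma sketch (placeholder numbers, not the source data).
YEARS = 15

one_time = {             # $M, hypothetical one-time costs paid in year 0
    "Construction": 60,
    "Land": 10,
}
recurring = {            # $M per year, hypothetical recurring costs
    "IT Equipment": 30,  # includes periodic hardware refresh, averaged per year
    "Power": 18,
    "Staff": 6,
    "Other": 3,
}

totals = dict(one_time)
for item, annual in recurring.items():
    totals[item] = annual * YEARS

grand_total = sum(totals.values())
for item, cost in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{item:>13}: ${cost:>4.0f}M  ({cost / grand_total:.0%} of 15-yr cost)")
```

With these placeholder inputs, the recurring items (IT equipment, then power) dominate the 15-year total, which matches the ordering described above.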

Keep this in mind as we move into the next section. 🙂

New High-Density, Asynchronous, Off-Grid Data Centers

The world is changing. Big data has overrun the planet. Companies are collecting every bit of data they can — even when they don’t yet know what to do with it.

Applications like video processing (rendering, detection), biomedical research, deep climate research, etc. all require significant compute resources to process gigantic data sets, and they represent huge growth sectors.

A lot of these applications can be summed up by the terms Machine Learning or Artificial Intelligence. Basically, where current data centers are optimized for bandwidth-intensive tasks, the new world that is forming is centered around compute-intensive tasks.

The beauty of this: these tasks don’t have to be real-time; they can operate asynchronously. For example, if I need a video rendered, I typically don’t need (or expect) it immediately. In many cases, video rendering can take a very long time.

For example, this 73-second clip from Hugo, the 2011 Scorsese film, took 171,105 hours of processing time to render. That means it would take a single computer roughly 20 years.

73 Seconds, 171,105 hours to Render
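A quick sanity check on that figure, assuming one machine running around the clock (my arithmetic, not from the source):

```python
# 171,105 compute-hours, worked off by a single machine running 24/7.
render_hours = 171_105
hours_per_year = 24 * 365

print(f"{render_hours / hours_per_year:.1f} years on one computer")  # ~19.5 years
```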

Okay, now we understand that these new types of tasks are growing. Let’s go back to AWS for a minute.

We’ve seen how big AWS has become. So, let’s now take a look at their product portfolio. Here’s an album of AWS product pages over the years (thanks archive.org!).

Using the data in this timeline we can make the following table of notable products launched by AWS.

The trend is clear: more and more products focused on compute-intensive tasks are being launched every year. Machine Learning alone has a portfolio of almost 20 products.

Another great example of very compute-intensive work is the consensus protocols fundamental to blockchain applications. Hint, hint. 🙂

Now here’s the thing: when you revisit the architecture of current data centers but throw away all of the real-time, synchronous tasks (video streaming, gaming, etc.), you can drastically change the cost basis of the data center. You can summarize the requirements:

The tasks are compute-heavy rather than bandwidth-intensive: you can relax the requirement for extremely fast internet connections.

The tasks are non-real-time and deadline-based: you can relax the requirements for 100% availability and for low-latency internet connections.

Add to this the proliferation of new, task-specific chips for AI, machine learning, rendering, etc. Moving to specialized chips brings an order-of-magnitude improvement in efficiency (compute output per unit of energy consumed) and in price (more specialized typically means less expensive). This means a single container full of specialized chips can outperform an entire data center 10x its size.
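To see what an order-of-magnitude efficiency gain implies for footprint, here is a toy comparison. The numbers are illustrative assumptions (including the 500 kW container), not measured benchmarks:

```python
# Illustrative only: what a 10x gain in work-per-watt implies for footprint.
general_purpose_work_per_kw = 1.0   # normalized useful work per kW (assumption)
asic_work_per_kw = 10.0             # assumed order-of-magnitude improvement

container_power_kw = 500            # a single high-density container (assumption)
equivalent_kw = container_power_kw * asic_work_per_kw / general_purpose_work_per_kw
print(f"A {container_power_kw} kW container of ASICs does the work of "
      f"~{equivalent_kw / 1000:.0f} MW of general-purpose servers")
```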

If you take all of these factors into consideration, you can completely shift the paradigm for data center design. Instead of developing an F1 car, you can build a work truck. Instead of a huge, complicated building that needs very specific site characteristics, you build a shipping container that can be dropped anywhere, abused, and still run efficiently.

Data centers that can be managed by local talent. Locals will only need training in the fundamentals of IT, augmented by very smart diagnostic systems.

This will make it easier to find viable sites for data centers.

Data centers that can operate on intermittent internet connections. Since most of the tasks we need are compute-intensive, they are essentially asking a computer to solve a puzzle. Whether it’s rendering a video (solving the math behind light reflections) or finding a new drug (solving the math to evaluate molecules), these tasks typically receive their data, then get to work. Once they are done, they return a relatively simple answer.

Remember the picture of the black hole?

This picture still blows my mind!

That was 5 petabytes of data. That’s approximately 1 MILLION hours of 1080p HD video, and the output of all of it can be summarized (compressed) into an incredible image (and, of course, a ton of incredibly useful data). This is the type of scale we’re talking about here. Compound that with high-speed satellite internet like Starlink, a constellation of low-Earth-orbit satellites that will shower the entire planet with fast internet.

This will make it easier to find viable sites for data centers.
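To make the "receive data, crunch, return a small answer" pattern concrete, here is a minimal sketch of what a batch job on such a container might look like. The job structure, helper functions, and the storage URI are my own illustration, not an existing API; the point is that the network is only needed at the edges, so the link can be slow or intermittent.

```python
# Sketch of an asynchronous, deadline-based batch job (illustrative only; the
# helpers are stand-ins, not a real API). The network is needed only to fetch
# inputs and return results; the heavy lifting in between can run for hours or
# days with no connectivity at all.
from dataclasses import dataclass

@dataclass
class BatchJob:
    job_id: str
    input_uri: str       # where to fetch the (possibly huge) input data set
    deadline_hours: int  # "done by Friday", not "done in 40 milliseconds"

def fetch_when_connected(uri: str) -> bytes:
    """Stand-in: block until a link is available, then download the input."""
    return b"...input data..."

def crunch(data: bytes) -> bytes:
    """Stand-in: hours or days of compute; needs power, not bandwidth."""
    return b"small answer"

def upload_when_connected(job_id: str, result: bytes) -> None:
    """Stand-in: the result is typically tiny compared to the input."""
    print(f"job {job_id}: returned {len(result)} bytes")

def run(job: BatchJob) -> None:
    data = fetch_when_connected(job.input_uri)
    upload_when_connected(job.job_id, crunch(data))

run(BatchJob("render-001", "s3://example-bucket/scene.tar", deadline_hours=72))
```

The black hole image is the extreme version of this shape: petabytes in, one picture (plus the underlying science data) out.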

The real kicker here, however, is compatibility with intermittent energy. Current data centers need incredible uptime to maintain performance, where uptime means operating under nominal conditions (clean power and a clean internet connection). So, a significant amount of money is spent on battery backups, and on rent for sites in close proximity to very clean power and internet.

What if, when there was extra energy, the computers spun up and processed at full speed? What if, when there was no energy, instead of wasting dollars to maintain clean power, you just shut the computers down?

That’s the game plan: make a data center that can operate on variable, off-grid energy.
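Here is a minimal sketch of that idea as a control loop: when surplus power is available, wake the machines and pull work off the queue; when it isn’t, shut them down and let deadlines, not uptime, drive the schedule. The function names, threshold, and telemetry are placeholders for whatever a real site would use, not a description of an actual controller.

```python
import time

# Power-following control loop (illustrative sketch, not a real controller).
SURPLUS_KW = 500  # assumed threshold: wake the containers above this surplus

def available_surplus_kw() -> float:
    """Stand-in for wind/solar plant telemetry."""
    return 750.0

def run_batch_jobs_for(minutes: int) -> None:
    """Stand-in: pull deadline-based jobs off a queue and crunch them."""
    print(f"computing for {minutes} minutes on surplus energy")

def power_down() -> None:
    """Stand-in: no surplus, so just stop; no diesel, no batteries to keep up."""
    print("idle: waiting for the wind to pick up")

def control_loop(cycles: int = 3, interval_minutes: int = 15) -> None:
    for _ in range(cycles):
        if available_surplus_kw() >= SURPLUS_KW:
            run_batch_jobs_for(interval_minutes)
        else:
            power_down()
        time.sleep(0)  # a real loop would wait interval_minutes * 60 seconds

control_loop()
```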

These factors combined mean these new data centers can effectively be located anywhere.

The High-Density, Asynchronous, Off-Grid Container data center. Look at that! I told you I needed help with a new name.

Off-Grid data centers will drive more profits and clean energy development

Recall from earlier that a significant portion of data center costs is driven by IT Equipment, followed by Power, Staff, and Construction. With the new paradigm, we can significantly reduce all of those factors:

IT Equipment: As more and more specialized chips get developed, they will become priced like commodities. Relaxing high-performance bandwidth requirements and using specialized chips will also reduce the complexity and overhead of the IT equipment associated with monitoring, operating and controlling a complicated building.

Power: By co-locating data centers with sustainable energy, and removing the requirement for perfectly stable energy, you can cut the price of power.

Staff: The new architecture will allow for staff that require significantly less training to maintain and operate the equipment. Think of a traditional data center as using doctors not only to evaluate test results, but also to run the lab tests and collect the samples. With the new architecture, only the most senior people need to be highly trained; the rest can be trained in the fundamentals, similar to how technicians and nurses make the healthcare system much more efficient.

Construction: Gigantic building with very complicated networks, cooling and control systems — or a shipping container.

Land Acquisition: This can be reduced significantly since you can locate the containers almost anywhere.

By making these key factors a lot more efficient, the relative weight of the cost of power becomes much more significant.

If you take all of these factors into account, for a similarly sized data center, the unit economics look like this (note: I didn’t change “other” or “taxes”).

Power is going to be a large driver of unit economics

Now, compare the two.

For many tasks, off-grid data centers will handle the work far more efficiently than hyperscale data centers

One thing we observe is that the price of power becomes a more significant chunk of the total unit economics. This means more and more research and development will be focused on a) reducing power load, and b) finding cheaper power.
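A toy example of why that matters: the larger the share of total cost that power represents, the more a given cut in the power price moves the bottom line. The cost shares and the 30% price cut below are illustrative assumptions, not figures from the charts above.

```python
# Toy sensitivity: savings from 30% cheaper power under two cost mixes.
# The shares are illustrative placeholders, not the pro forma figures above.
def total_cost_savings(power_share: float, power_price_cut: float) -> float:
    """Fraction of total cost saved when only the power price drops."""
    return power_share * power_price_cut

for label, power_share in [("hyperscale (power ~15% of cost)", 0.15),
                           ("off-grid   (power ~40% of cost)", 0.40)]:
    savings = total_cost_savings(power_share, power_price_cut=0.30)
    print(f"{label}: 30% cheaper power -> {savings:.0%} lower total cost")
```

The same power-price improvement is worth roughly three times as much to the architecture where power dominates the cost structure, which is why cheaper power becomes the obvious place to focus R&D.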

By co-locating these data centers with green energy plants, and removing the necessity for 100% uptime, these data centers can become consumers of cheap, renewable energy all over the world. They can be deployed to monetize green energy that would otherwise go unused, or remain undeveloped due to lack of profitability. They can be deployed to balance grids where, during off-peak hours, there is an abundance of very clean, cheap power.

Basically, these new data centers can become a catalyst for large-scale green energy development that simultaneously makes data centers more profitable.

Another observation: for sites of similar power consumption, the new architecture is 70% cheaper to build and, after 15 years, operates roughly 40% more efficiently.

Traditional and New — destined to co-exist

What’s important to note here is that traditional data centers aren’t going anywhere. We mentioned earlier how much that market is growing.

Today, these niche, compute-heavy workloads are being deployed on current data centers. This is like using an F1 car to drive off-road. It can work, but it’s not efficient.

Specialized chips are being funded at a record pace. Samsung alone is investing over $110 billion into non-memory chips. Take a look at a sample of some of the latest funding rounds (and who is funding them).

Source: https://blog.hardwareclub.co/theres-an-investment-frenzy-over-ai-chip-startups-d9b5ea42b5c4

As specialized chips and more efficient hardware gets developed, the new architecture will start to heavily outperform traditional data centers (for compute-heavy tasks). Also, as more and more manufacturers and chip makers attack these markets, the prices of the chips will become market-driven and commodity-like.

There will be a need for both kinds of data centers:

  • Traditional: Web 2.0 (Social, Video, Real-time)
  • New: Web 3.0 (Compute Intensive, AI, ML, Blockchain, Etc.)

We will prove it at Soluna

If you attend data center conferences or talk to data center experts — they are almost all focused on how to make traditional data centers more efficient. They talk about new power architectures like DC-DC power supplies, new storage equipment, new cooling technologies, and so on.

When I propose a truly off-grid data center solution, I am usually met with shock, awe, and sometimes patronizing disgust. I love it! This is mostly driven by the massive growth in the existing markets. Why bother changing everything when the current thing is in such demand?

Imagine if Elon had tried to launch an electric car during the hypergrowth years of traditional internal-combustion cars. It would have been a disaster. Back then, gas was cheap, the air was relatively clean, no infrastructure existed to enable such a bold solution, and all R&D from the big players was focused on feeding the hypergrowth of gas automobiles.

The same thing is true now: there are only a few people I’ve met who even want to explore these kinds of architectures, and that is because their focus (rightfully so) is on an existing, thriving market.

This new market is growing fast, and the earth is dying fast. So, Soluna is going to prove this business is viable and ultra-profitable. We’re going to build a massive wind farm in southern Morocco and co-locate one of the largest off-grid computing clusters with it. We hope this will start a trend, and that more and more companies will develop new solutions and projects that pair off-grid computing with green energy, enabling abundant clean energy, new technical jobs, and economic growth around the world.