
A Look Inside Soluna Computing: Our Engineering Team Answers Your Questions

Keywords: AMA, Batchable Computing, Business, Computing, HPC

Welcome to the second installment of Soluna’s new AMA (Ask Me Anything) series! We recently shared a form on our social media, @SolunaHoldings, and invited you to drop questions.

In our first AMA with Michael and John, our CEOs, we answered some of your questions about our business model, financing, and hodling Bitcoin.

In this interview, Dip Patel, CTO of Soluna Computing, and Nick Lancaster, VP of Engineering, answer shareholders’ and potential investors’ most asked questions about our data center design, computing capabilities, and power efficiency.

Peter asked: Have you considered running power-efficient ARM servers besides GPUs for batch computing in your data centers? Why or why not?

Dip: Peter, that’s a fantastic question. The answer is yes, we have. In fact, as we dug into the market, we learned that more and more specialized computing is being used for very niche purposes. GPUs are a big and growing piece of that, but FPGAs, ARM servers, and similar hardware are also in use. What’s beautiful about our data centers is that they’re designed, not only the hardware and the software but the cooling and the power inside the buildings, to accept any of these kinds of servers. So as the market and its needs shift one way or another, we can accommodate that, all while helping the grid and the green energy providers consume those wasted watts.

Alex asked: What percentage of your computing power is currently dedicated to ASIC crypto mining, what percentage is dedicated to other data center activities, and what will this look like as Soluna diversifies?

Dip: That’s a great question. Currently, the large majority of our computing power is dedicated to ASIC crypto mining. We do some GPU mining and run various kinds of ASICs, but the vast majority, especially from a power-consumption perspective, is ASIC-based. Our game plan is to diversify; that’s why we architected our buildings the way we did. The goal this year is to get into GPU mining at scale, as well as offer those services to AI/ML customers.

Nick: Excellent. And I think the customer will drive some of that diversity as well as we go along, right?

Dip: Yeah, exactly. We’ll have to talk to them and find out what they really need and want, how that fits with what we need and want, and with what the power providers need and want. It’s a beautiful Venn diagram of those three things, and luckily the intersection is large.

Nick: That flows into another good question, from Neil. He assumes that mining computers can only mine cryptocurrency, and that, beyond maybe scavenging their graphics cards, there’s not much use for them after you replace them, which typically happens before they mechanically fail.

Neil’s question is: Once we start to integrate computers for batch data processing, can those units be reprogrammed for the next project once they’re finished with their current operations?

Dip: First of all, Neil, you’re pretty dang correct. If you have GPU-based mining rigs and you want to do other things with them, typically the graphics cards are the only parts you can reuse. The processor, RAM, hard drive, and even the power supplies aren’t optimized for general-purpose work, so the graphics cards are usually all you can scavenge when converting from mining to other tasks. Once we get to batch-oriented computing, though, the world’s our oyster. Think of it like a sport utility vehicle. A typical SUV, like a Ford Explorer (the first one we had in our family), can do a lot of things.

It’s really good at a lot of things, and it’s not so good at others. There are some tasks out there where you basically need the most specialized vehicle to be competitive. We’ll avoid those worlds for now and pick the ones that aren’t so selective, where a Ford Explorer is perfectly fine. There are a lot of tasks out there that need that Ford Explorer.

Nick: Here’s the last question from this area of computing.

Roman asks: What’s our timeline for implementing a global 24/7 computer network, and how high a priority is that for Soluna?

Dip: That’s a great question. I’ll tackle it backward, because the priority defines the timing. The answer is similar to the one about rigs and reprogramming: right now, we’re going to pick tasks that aren’t so sensitive to 24/7 uptime while we build out our global network and work toward 24/7 uptime. Our priority today is solving the needs of the energy providers and the grids. Again, no wasted watts; we want to use every megawatt. So we’re going to work on solving the biggest needs of the energy grids and power providers while simultaneously solving the needs of the computing customers. Over time, the priorities will shift toward that 24/7 uptime. Right now it’s not in the top five priorities, I would say, but it’s definitely on our roadmap.

Nick: And that’s just the character of using excess energy: we’re building infrastructure that allows us not to have to be 24/7, right? That’s what our technology is focused on right now, because that’s how the power plants operate.
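
To make “batchable” concrete: these are workloads that can stop when power is withdrawn and pick up where they left off when it returns, because they save their progress as they go. Here is a minimal sketch of that checkpoint-and-resume pattern in Python; the file name and process function are hypothetical illustrations, not Soluna’s actual software.

```python
import json
import os

CHECKPOINT = "job_state.json"  # hypothetical path, for illustration only

def process(item):
    """Stand-in for one unit of batch work (e.g. rendering one frame)."""
    print(f"processed item {item}")

def load_state():
    """Resume from the last saved position, or start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"next_item": 0}

def main(items):
    state = load_state()
    # If power is curtailed mid-run, everything up to the last
    # checkpoint survives; the next run resumes from there.
    for i in range(state["next_item"], len(items)):
        process(items[i])
        state["next_item"] = i + 1
        with open(CHECKPOINT, "w") as f:
            json.dump(state, f)  # durable progress after each item

if __name__ == "__main__":
    main(list(range(10)))
```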

Dip: That’s right, Nick. Exactly, nailed it. When we met, Nick, I was an electrical engineer and you were a mechanical engineer, and we joked at Lockheed that in the systems we worked on together, the electrical engineers generate the heat and the mechanical engineers have to remove it. It’s similar today: we’re building an MDC (modular data center) that’s all about moving the heat.

An anonymous reader asked a great question: Computing generates a lot of heat. How does Soluna manage the thermodynamics, and are you doing anything to profit from that heat?

Nick: Cool. That’s a great question, and it’s one of the first tasks I worked on for Soluna, even before I joined full-time: what do we do with all the heat these miners generate? Our MDCs are designed from the ground up to handle this heat, remove it from the facility, and keep it from being re-ingested at the front of the building. One of our goals as a green energy company is to keep our PUE as low as possible. So we optimize the airflow through the building: minimal restrictions from filters, minimal turns in the air path, the lowest possible velocity getting air to the miners, and plenty of fresh air supplied to them.

We use ambient air, so we focus on volume more than low temperature; we flow it through the buildings and get it out of the way so it doesn’t get re-ingested. Part of the key to managing the heat is simply getting it out and away from the buildings and not bringing it back in. We’re not currently doing anything to profit from this heat. It’s low-quality heat; the exhaust temperature of these miners isn’t especially high. But it’s something Dip and I have talked about before. There are definitely options out there; right now we’re just not pursuing any of them.
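
For readers unfamiliar with the metric Nick mentions, PUE (Power Usage Effectiveness) is the ratio of total facility power to the power consumed by the computing equipment itself; 1.0 is the theoretical ideal, where no power at all goes to cooling or overhead. A quick illustrative calculation, with made-up numbers rather than Soluna’s actual figures:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by
    the power that reaches the computing equipment itself."""
    return total_facility_kw / it_equipment_kw

# Hypothetical numbers for illustration only, not Soluna's figures:
# 10,000 kW of miners plus 500 kW of fans and overhead -> PUE of 1.05.
print(pue(total_facility_kw=10_500, it_equipment_kw=10_000))
```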

Dip: I love the way you describe these things. Exactly. What’s so elegant about our system is that it’s designed purely to be efficient, right? It’s not designed to be fancy. It’s designed to cool the machines to spec, keep them running, be easy to maintain, and operate in pretty rough environments.

Nick: Operable by anybody, right? Maintaining our buildings is simple; it’s basically like changing the air filter in your air conditioner. Very straightforward.

Dip: That’s a great point. There’s no plumbing involved, no compression, none of that stuff. It’s really cool, man, I love it. What’s really neat is that some of the systems you built at Lockheed were so complicated; it speaks to your range, your ability to really attack the problem rather than just apply technology.

Nick: It’s the solution to the problem at hand. Not to all the fun problems, but to the problem that needs to be solved. So next up, we’ve got a question for you, Dip, about IP.

Shankar asks: Is there any possible IP on the data center design?

Dip: Yes. To answer your question, Shankar, there’s quite a bit of IP we can generate, not only on the data center side but also on the software, architecture, and design sides, and even in the workflows and user experiences we’re developing. We can’t speak too deeply about it because we want to file the patents first, but what I can tell you is that we have a strong strategy. We approach IP through patents, trademarks, copyrights, and trade secrets, the things we want to keep in-house. Once we’ve settled on that strategy, and once we have patents and can unveil what they are, we can decide how we share them, or whether people need to license or use them. But the bottom line is that so much of the way we’re approaching this business is new that there’s a lot of innovative material here, and we’re going to have a fun time with IP.

Nick: A nice part of attacking a new problem in a new way is that there’s lots of greenfield, lots of room for innovation like that.

Dip: In fact, Nick, the first patent we filed, provisionally, is one we co-authored; you were the lead. It’s a great question, and I’m really glad it was asked. The timing is perfect, too, because that IP engine is a focus of ours as we grow. Well, not building it anymore, but growing and optimizing it.

So, Nick, I wanted to ask you about something. From our first job together, and even in our second one as founders, when we built home energy startups, security, privacy, and a baseline of safety have been part of our DNA. It was beaten into us as engineers through ethics training and an understanding of the implications of what we do, and it was further reinforced by the diligence at Lockheed, the peer reviews, and the way engineering was taught to us, the way we were trained.

So, we had a question written by Ilia that asks: How is Soluna managing the physical security and cybersecurity of its equipment, sites, and network?

And as an example, they linked to Amazon’s controls page, which was fantastic. Thank you for sharing that, by the way. So Nick, what are your thoughts there?

Nick: Unfortunately, security is not a new and innovative space; it has always been a challenge for governments and companies. As you said, we’ve had a lot of experience in that space, so I’ll talk about it from a technology point of view. Because it’s not a new space, a lot of it comes down to leaning on best practices: we isolate networks and we run strong firewalls. As for the controls and reliability topics the Amazon document covered, we have redundancies in our control systems, and those systems are designed to fail in the proper direction, for safety and for cost reasons.

So really, when you’re building a new site, you take into account all the potential issues that would affect security. That means minimizing attack vectors on the cyber side, as well as physical measures like facility lighting for nighttime operations, fences, and so on. Security has to be a holistic approach, and I think we’ve done it that way. Dip, since you were heavily involved in the standup of Sophie, you can probably discuss how we approach it in other ways as well.

Dip: So many security breaches happen because of a human mistake, and that’s not the human’s fault; it’s typically a failure of the system. People are forced to choose between secure and convenient, and if you can find a way to deliver both, you win the game. So our game plan is: how do we get everyone, from the newest temp or intern up to John and Michael, our CEOs, to value security before it ever becomes a problem? That’s really what operational security is all about. We have training in place, and we make systems that are easy to use. And just like with the systems at Lockheed, we look at different systems, mesh things together, and figure out what else we can do.

I also think there’s value in the act of training and the culture we’re building. We talk about the value of strong passwords, of rotating them, and of keeping them unique. We discuss articles that get shared around; we talk about energy topics, thermo topics, mining topics, and new developments in computing security. So it’s really a core focus for the whole company. That’s something you have to do when you’ve got sites in very remote areas: you need people who value the operational security piece.

Nick: I’ve been pleasantly surprised with the team we have. They’ve adopted a password manager, which for a lot of people is an unfamiliar technology; they just want to type 1234 and be done with it. And on Slack, we get people posting, “Hey, I got this weird email,” and we reply, “Don’t respond to it, it looks suspicious.” That’s pretty cool.
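
The core job of the password manager Nick mentions is generating a unique, high-entropy credential for every account, so that nobody falls back to “1234”. A minimal sketch of that idea, for illustration only:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a unique, high-entropy password, the kind a
    password manager creates so nobody has to remember it."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a fresh random credential each call
```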

Dip: I want to thank everybody who has submitted questions; please continue to do so. We’re happy to answer them! If you haven’t had a chance to submit your question, drop it on Twitter @SolunaHoldings, and be sure to subscribe to our newsletter. We’re answering new questions there every week.