By Nicola Phillips, Contributing Writer
Last fall, OpenAI released a demo version of ChatGPT, prompting the release of dozens of other chatbots that had been in development for years. These chatbots are redefining the way we use the Internet. Unlike the search engines that have been the dominant portals to Internet sites and information since the mid-90s, chatbots use artificial intelligence (AI) to deliver natural language responses to queries, skipping the intermediate step of sending you to relevant websites.
In the span of less than a year, AI has become mainstream and is increasingly integrated into everyday life. There is great excitement about the opportunities for AI to enrich our lives, and also deep concern. Like any significant emerging technology, AI promises potentially huge rewards and carries correspondingly large risks. One arena where those risks and rewards are particularly pronounced is climate change mitigation and adaptation.
Advanced analytics and artificial intelligence can construct and interpret large, complex datasets in minutes, greatly enhancing our ability to make data-driven decisions on a range of problems, including climate change.
AI has the potential to offer new solutions to climate-related challenges, including flood risk analysis, geospatial mapping and precision agriculture.
At the same time, the anticipated exponential growth in applications that utilize AI will dramatically increase the computing demands and commensurate energy demands of data centers in the next few years. How do we reconcile the climate externalities of AI processing with the potential climate benefits of AI analytics?
Applications like ChatGPT are built on a technology called large language models (LLMs): deep-learning models trained on vast amounts of text gathered from across the Internet, which they learn to make sense of.
Training an LLM requires an enormous number of calculations to tune the model parameters that give these machines near-human fluency. Running those calculations requires racks of specialized hardware in data centers that consume prodigious amounts of energy.
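To make "prodigious amounts of energy" concrete, here is a back-of-envelope estimate of the electricity consumed by a single large training run. Every figure below (GPU count, per-device power draw, data center overhead, training duration) is an illustrative assumption, not a measured value for any real model.

```python
# Back-of-envelope estimate of the energy used by one LLM training run.
# All input figures are hypothetical assumptions for illustration only.
gpu_count = 1000          # accelerators running in parallel (assumed)
power_per_gpu_kw = 0.4    # average draw per accelerator, in kW (assumed)
pue = 1.5                 # power usage effectiveness: facility overhead (assumed)
training_days = 30        # wall-clock duration of the run (assumed)

facility_kw = gpu_count * power_per_gpu_kw * pue       # total facility load
energy_mwh = facility_kw * training_days * 24 / 1000   # kWh -> MWh

print(f"~{energy_mwh:,.0f} MWh for one training run")  # roughly hundreds of MWh
```

Even under these modest assumptions, a single run lands in the hundreds of megawatt-hours, comparable to the annual electricity use of dozens of homes, which is why the siting and power source of these data centers matters.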
But LLMs and other AI workloads are fundamentally different from many of the applications currently running in data centers. Much of what we use devices for day-to-day requires continuous computing — search engines, most apps, Google Maps, any kind of social media. But large language models, and many other AI models, can be trained in batches rather than continuously.
Processing large volumes of data at one time is computation-heavy but highly flexible: between batches, hardware can be shut down without damaging the integrity of the applications. There is no need for 24/7 continuous power.
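The batch-versus-continuous distinction can be sketched as a toy scheduler: batch jobs simply wait in a queue until surplus power is available, whereas a continuous service could never tolerate the idle hours. The hourly surplus profile and job names below are hypothetical.

```python
# Toy illustration of batchable computing: jobs run only during hours
# when surplus (otherwise-curtailed) renewable power is available.
surplus_mw = [0, 0, 5, 12, 9, 0, 0, 3]   # hypothetical surplus power by hour
job_queue = ["train-shard-1", "train-shard-2", "flood-risk-model"]

schedule = []
for hour, surplus in enumerate(surplus_mw):
    if surplus > 0 and job_queue:
        # Power is available: wake the hardware and run the next batch job.
        schedule.append((hour, job_queue.pop(0)))
    # Otherwise the hardware sits powered down -- no 24/7 draw required.

for hour, job in schedule:
    print(f"hour {hour}: run {job}")
```

The design point is that correctness does not depend on *when* each batch runs, only that it eventually does, which is what lets the workload follow the power rather than the other way around.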
One huge advantage of batchable computing is that it can be accomplished with data centers that are powered by renewable energy.
This is the model that drives the Soluna Computing solution. Soluna’s data centers run batchable computing, allowing them to operate during off-peak hours, when the energy being produced would otherwise go to waste. Soluna buys this otherwise curtailed energy and uses it to power compact, uniquely designed data centers.
In order to realize the climate benefits promised by AI without giving back those gains by generating massive new energy loads on the grid, we need a new kind of data center.
Check out the next installment of this series on AI, entitled "ChatGPT: The Mosaic Moment of AI."