Why Neoclouds Are Vital to AI Startups
Commodity compute is key to new AI research breakthroughs. AI startups and Nvidia have allied with financial firms to prevent this commodity from being monopolized by incumbent software giants.

The recent surge of optimism about artificial intelligence has fueled a boom in demand for the computing power, or “compute,” that new AI algorithms require both for training (development) and for inference (usage). These compute demands are met by chips such as Nvidia’s “graphics processing units” (GPUs), which can execute vast numbers of calculations in parallel. GPUs and their equivalents from other companies like AMD or Google can be purchased and run on a company’s own premises, in what is now commonly called a “data center” or “compute cluster.” But the financial costs and organizational complexity of building such data centers, maintaining them, and inevitably upgrading them with newer, more powerful chips are prohibitively high, apparently even for well-funded frontier AI labs like OpenAI and Anthropic. As a result, many companies and individuals instead rent compute from “cloud computing” providers such as Amazon Web Services (AWS) or Microsoft Azure. Alongside these large, established firms, a number of smaller, newer “neocloud” companies have arisen to meet demand specifically for remotely renting out AI compute.

