I’m often asked about the AI training project that cost millions of dollars and two years of my life. People wonder: why is it so expensive?

My usual answer is that it’s not particularly expensive, especially since we don’t yet own our own hardware. Training AI has always been about massive data centers; that’s just the reality of the field. When you’re not immersed in it, the sheer scale can be hard to visualize.

Today, I came across a compelling article that puts this into perspective. For those who haven’t heard, Elon Musk is launching the ‘Gigafactory of Compute’ as part of his xAI project. This is a literal factory for training artificial intelligence, and the scale is immense.

Take a look at the photo. A single data center like this already draws enough power to supply 300,000 homes. That’s only the first stage, and it’s just one of several planned facilities.

Here’s a brief look at Musk’s project:

  • xAI is Elon Musk’s independent company, positioned as an alternative to OpenAI and Google DeepMind.
  • The Gigafactory of Compute is the massive infrastructure behind it: a cluster of 200,000 GPUs backed by Tesla batteries.
  • Musk has stated that the second phase of the project will draw up to 300 MW of power, which is comparable to the consumption of an entire city.
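To sanity-check those numbers, here is a rough back-of-envelope calculation. The 1.2 kW figure for an average household's continuous draw is my assumption, not from the article:

```python
# Back-of-envelope: how many homes a 300 MW facility could power.
# Assumption (not from the article): an average household draws
# roughly 1.2 kW of continuous power.
facility_power_mw = 300   # stated second-phase draw
avg_home_kw = 1.2         # assumed average household draw

homes = facility_power_mw * 1000 / avg_home_kw
print(f"~{homes:,.0f} homes")  # ~250,000 homes
```

That lands in the same ballpark as the "300,000 homes" figure quoted for the first stage, so the comparison to a city's consumption holds up.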

The goal is clear: to create a new level of computational power that is not dependent on Amazon, Microsoft, or Google. So, when people question the cost of our work, this is the context. This is the scale required to operate at the forefront of AI development.