<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>#GPUInfrastructure on Home</title>
    <link>https://yakinin.com/en/tags/%23gpuinfrastructure/</link>
    <description>Recent content in #GPUInfrastructure on Home</description>
    <generator>Hugo -- 0.148.2</generator>
    <language>en</language>
    <lastBuildDate>Fri, 09 May 2025 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://yakinin.com/en/tags/%23gpuinfrastructure/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Why AI Training Costs Millions: A Look at the &#39;Gigafactory of Compute&#39;</title>
      <link>https://yakinin.com/en/posts/20250509-elon-musk-xai-gigafactory-compute/</link>
      <pubDate>Fri, 09 May 2025 00:00:00 +0000</pubDate>
      <guid>https://yakinin.com/en/posts/20250509-elon-musk-xai-gigafactory-compute/</guid>
      <description>&lt;div style=&#34;display: flex; justify-content: center; gap: 1em; flex-wrap: wrap;&#34;&gt;
  &lt;img src=&#34;https://yakinin.com/img/20250509-elon-musk-xai-gigafactory-compute-0.jpeg&#34; style=&#34;max-width: 350px; width: 100%;&#34; /&gt;
&lt;/div&gt;
&lt;p&gt;I&amp;rsquo;m often asked about the AI training project that cost millions of dollars and two years of my life. People wonder: why is it so expensive?&lt;/p&gt;
&lt;p&gt;My usual answer is that it&amp;rsquo;s not particularly expensive&amp;mdash;especially considering we don&amp;rsquo;t yet own our own hardware. Training AI has always meant massive data centers; that&amp;rsquo;s simply the reality of the field. When you&amp;rsquo;re not immersed in it, the sheer scale can be hard to visualize.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
