<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>#LocalAI on Home</title>
    <link>https://yakinin.com/en/tags/%23localai/</link>
    <description>Recent content in #LocalAI on Home</description>
    <generator>Hugo -- 0.148.2</generator>
    <language>en</language>
    <lastBuildDate>Thu, 17 Apr 2025 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://yakinin.com/en/tags/%23localai/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>OpenAI&#39;s Codex CLI: A Quiet Win for Open-Source</title>
      <link>https://yakinin.com/en/posts/20250417-openai-codex-cli-open-source/</link>
      <pubDate>Thu, 17 Apr 2025 00:00:00 +0000</pubDate>
      <guid>https://yakinin.com/en/posts/20250417-openai-codex-cli-open-source/</guid>
      <description>&lt;p&gt;OpenAI has released Codex CLI, an open-source AI agent for developers. This marks a quiet but significant victory for the open-source community.&lt;/p&gt;
&lt;p&gt;The tool lets developers issue natural-language requests directly in the terminal: the agent interprets the request, then writes, executes, and tests the code. Crucially, the agent runs locally, and the code it produces is executed and tested on the developer&amp;rsquo;s own machine rather than in a hosted sandbox.&lt;/p&gt;
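&lt;p&gt;To make that loop concrete, here is a deliberately simplified sketch of the interpret → write → execute → test cycle. It is not Codex CLI&amp;rsquo;s implementation (the model call is stubbed out with a canned answer); it only illustrates the pattern in plain Python.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-python&#34;&gt;import subprocess
import sys
import tempfile
from pathlib import Path

# Stand-in for the model call. This sketch is illustrative, not Codex CLI&#39;s
# actual implementation; a real agent would query an LLM here.
def ask_model(request: str) -&gt; str:
    return &#34;def add(a, b):\n    return a + b\n\nassert add(2, 3) == 5\n&#34;

def run_request(request: str) -&gt; None:
    code = ask_model(request)                      # interpret the request
    path = Path(tempfile.mkdtemp()) / &#34;task.py&#34;
    path.write_text(code)                          # write the code locally
    result = subprocess.run(                       # execute and test it
        [sys.executable, str(path)], capture_output=True, text=True
    )
    print(&#34;exit code:&#34;, result.returncode)

run_request(&#34;Write an add(a, b) function and test it.&#34;)
&lt;/code&gt;&lt;/pre&gt;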
&lt;p&gt;With this release, the industry moves one step closer to a system that can independently understand, build, and deploy solutions. It underscores a critical point: the future isn&amp;rsquo;t just about choosing the right model, but about engineering the right architecture that connects &lt;strong&gt;thought → action&lt;/strong&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>DeepSeek-V3: A Quiet Release with Impressive Local Performance</title>
      <link>https://yakinin.com/en/posts/20250801-deepseek-v3-local-performance/</link>
      <pubDate>Thu, 27 Mar 2025 11:22:11 +0000</pubDate>
      <guid>https://yakinin.com/en/posts/20250801-deepseek-v3-local-performance/</guid>
      <description>&lt;div style=&#34;display: flex; justify-content: center; gap: 1em; flex-wrap: wrap;&#34;&gt;
  &lt;img src=&#34;https://yakinin.com/img/20250801-deepseek-v3-local-performance-0.jpg&#34; style=&#34;max-width: 350px; width: 100%;&#34; /&gt;
&lt;/div&gt;
&lt;p&gt;DeepSeek has once again followed its &amp;ldquo;quiet release&amp;rdquo; strategy, making its new DeepSeek-V3-0324 model available on Hugging Face without any major announcement. Instead of marketing hype, they&amp;rsquo;ve simply published the weights and left the evaluation to the community.&lt;/p&gt;
&lt;p&gt;I tested the model locally on a Mac Studio equipped with an M3 Ultra chip and saw impressive performance, generating over 20 tokens per second. That is a significant step forward in how fast capable models can run on local hardware, and it makes this one a viable option for developers.&lt;/p&gt;
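&lt;p&gt;For readers who want to reproduce something similar, here is a minimal sketch of one way to run the model locally on Apple Silicon. It assumes the mlx-lm package and a quantized community conversion of DeepSeek-V3-0324; the repository name below is illustrative, and the exact runtime and quantization level will affect throughput.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-python&#34;&gt;from mlx_lm import load, generate

# Hypothetical quantized conversion of DeepSeek-V3-0324; the exact repo id
# and quantization level are assumptions, not details from the post.
model, tokenizer = load(&#34;mlx-community/DeepSeek-V3-0324-4bit&#34;)

prompt = &#34;Write a Python function that checks whether a string is a palindrome.&#34;

# verbose=True prints the generated text plus prompt and generation
# tokens-per-second, which is how throughput figures like the one above
# can be measured.
response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
&lt;/code&gt;&lt;/pre&gt;</description>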
    </item>
  </channel>
</rss>
