<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>#OpenSourceAI on Home</title>
    <link>https://yakinin.com/en/tags/%23opensourceai/</link>
    <description>Recent content in #OpenSourceAI on Home</description>
    <generator>Hugo -- 0.148.2</generator>
    <language>en</language>
    <lastBuildDate>Wed, 27 Aug 2025 08:45:15 +0000</lastBuildDate>
    <atom:link href="https://yakinin.com/en/tags/%23opensourceai/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>DeepSeek vs. OpenAI&#39;s OSS: A Tale of Two Open-Source Models</title>
      <link>https://yakinin.com/en/posts/20250827-deepseek-vs-openai-open-source-models/</link>
      <pubDate>Wed, 27 Aug 2025 08:45:15 +0000</pubDate>
      <guid>https://yakinin.com/en/posts/20250827-deepseek-vs-openai-open-source-models/</guid>
      <description>&lt;p&gt;Two major players recently dropped new open-source models that represent two fundamentally different philosophies. OpenAI, the established leader, returned to the open-source scene with fanfare and its &lt;code&gt;gpt-oss-20b&lt;/code&gt; model. Shortly after, the Chinese startup DeepSeek quietly released &lt;code&gt;v3.1&lt;/code&gt;. While one was a media event, the other was a single tweet.&lt;/p&gt;
&lt;p&gt;The initial results from hands-on testing are starkly one-sided.&lt;/p&gt;
&lt;h2 id=&#34;out-of-the-box-performance-a-clear-winner&#34;&gt;Out-of-the-Box Performance: A Clear Winner&lt;/h2&gt;
&lt;p&gt;When you evaluate a model as a tool to be used right now, the comparison is not even close. Across multiple practical tests, DeepSeek v3.1 consistently delivered superior results:&lt;/p&gt;</description>
    </item>
    <item>
      <title>NVIDIA&#39;s New Open-Source Models Tackle AI&#39;s Language Gap</title>
      <link>https://yakinin.com/en/posts/20250816-nvidias-new-open-source-models-tackle-ais-language-gap/</link>
      <pubDate>Sat, 16 Aug 2025 11:49:38 +0000</pubDate>
      <guid>https://yakinin.com/en/posts/20250816-nvidias-new-open-source-models-tackle-ais-language-gap/</guid>
      <description>&lt;p&gt;The vast majority of AI development is concentrated in a handful of languages, leaving a significant capabilities gap for much of the world. NVIDIA is addressing this imbalance with a new suite of open-source models and tools designed to expand high-quality speech AI, with an initial focus on 25 European languages.&lt;/p&gt;
&lt;p&gt;This initiative moves beyond simply releasing models; it provides the foundational components for building localized, multilingual AI applications. The goal is to empower developers to create robust tools like multilingual chatbots, real-time translation services, and intelligent customer service bots for languages often overlooked by mainstream tech, including Croatian, Estonian, and Maltese.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI&#39;s GPT-OSS: A Major Step Back Towards &#39;Open&#39;</title>
      <link>https://yakinin.com/en/posts/20250813-openai-gpt-oss-northflank/</link>
      <pubDate>Wed, 13 Aug 2025 15:55:16 +0000</pubDate>
      <guid>https://yakinin.com/en/posts/20250813-openai-gpt-oss-northflank/</guid>
      <description>&lt;p&gt;OpenAI just made a significant move by releasing GPT-OSS, its first truly open-source large language model family since GPT-2. Released under the permissive Apache 2.0 license, it isn&amp;rsquo;t just a minor update; it&amp;rsquo;s a fundamental shift that puts real power back into the hands of developers.&lt;/p&gt;
&lt;p&gt;The family includes two Mixture-of-Experts (MoE) models, gpt-oss-20b and gpt-oss-120b, designed for high-performance inference with strong reasoning capabilities.&lt;/p&gt;
&lt;h2 id=&#34;why-this-is-a-game-changer&#34;&gt;Why This Is a Game-Changer&lt;/h2&gt;
&lt;p&gt;For years, the most powerful models from OpenAI have been locked behind APIs. This meant dealing with rate limits, opaque pricing, and sending potentially sensitive data to a third party. GPT-OSS changes that equation entirely.&lt;/p&gt;</description>
    </item>
    <item>
      <title>My Take on GPT-5, OpenAI&#39;s Strategy, and the Dawn of &#39;AI Time&#39;</title>
      <link>https://yakinin.com/en/posts/20250813-chat-gpt5-open-ais-quadruple-play-birth-ai-time/</link>
      <pubDate>Wed, 13 Aug 2025 15:49:56 +0000</pubDate>
      <guid>https://yakinin.com/en/posts/20250813-chat-gpt5-open-ais-quadruple-play-birth-ai-time/</guid>
      <description>&lt;p&gt;A recent Forbes article by John Sviokla put a name to something many of us in the AI space have been feeling: the shift to &lt;strong&gt;AI Time&lt;/strong&gt;. It’s the idea that the tempo of innovation and organizational operations is no longer dictated by human speed, but by the near-instantaneous cycle of silicon intelligence. OpenAI&amp;rsquo;s GPT-5 launch is a masterclass in this new reality.&lt;/p&gt;
&lt;p&gt;It wasn&amp;rsquo;t a simple model update; it was a multi-front strategic deployment that reshapes the competitive landscape. I see it as a &amp;ldquo;quadruple play&amp;rdquo; that establishes a new baseline for the industry.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI&#39;s Hand Was Forced: Why the AI Race is No Longer Won in Secret</title>
      <link>https://yakinin.com/en/posts/20250813-openai-open-source-pivot-china-ai/</link>
      <pubDate>Wed, 13 Aug 2025 15:49:27 +0000</pubDate>
      <guid>https://yakinin.com/en/posts/20250813-openai-open-source-pivot-china-ai/</guid>
      <description>&lt;p&gt;For years, the AI frontier was defined by closed doors and proprietary models. That era is officially over. OpenAI&amp;rsquo;s recent pivot to open-source isn&amp;rsquo;t just a strategic shift; it&amp;rsquo;s a direct response to a new reality: the center of AI innovation has gone public, and China is leading the charge.&lt;/p&gt;
&lt;h2 id=&#34;the-open-source-tipping-point&#34;&gt;The Open-Source Tipping Point&lt;/h2&gt;
&lt;p&gt;The catalyst was the surprise release of high-performance models by Chinese startup DeepSeek. As a recent Fortune article aptly pointed out, this move exposed a critical vulnerability in the &amp;ldquo;closed-garden&amp;rdquo; strategy of Western AI labs. By making powerful AI openly accessible, DeepSeek didn&amp;rsquo;t just win goodwill; it ignited an explosion of development across China. Companies from Baidu to Alibaba quickly followed suit, creating a tidal wave of open innovation.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Qwen-Image: A New Open-Source Challenger for AI Image Generation</title>
      <link>https://yakinin.com/en/posts/20250806-qwen-image-ai-text-generator/</link>
      <pubDate>Wed, 06 Aug 2025 00:00:00 +0000</pubDate>
      <guid>https://yakinin.com/en/posts/20250806-qwen-image-ai-text-generator/</guid>
      <description>&lt;div style=&#34;display: flex; justify-content: center; gap: 1em; flex-wrap: wrap;&#34;&gt;
  &lt;img src=&#34;https://yakinin.com/img/20250806-qwen-image-ai-text-generator-0.png&#34; style=&#34;max-width: 350px; width: 100%;&#34; /&gt;
&lt;/div&gt;
&lt;p&gt;Alibaba&amp;rsquo;s Qwen Team has released Qwen-Image, a powerful, open-source AI image generator that aims to solve one of the most persistent challenges in the field: rendering crisp, accurate text within visuals. This is a significant move in a market dominated by players like Midjourney.&lt;/p&gt;
&lt;h2 id=&#34;the-core-promise-solving-text-in-ai-images&#34;&gt;The Core Promise: Solving Text in AI Images&lt;/h2&gt;
&lt;p&gt;Where many generative models falter, Qwen-Image is designed to excel at integrating text. It supports both English and Chinese, managing complex typography, multi-line layouts, and bilingual content. This opens up practical applications that are often frustrating to achieve with other tools:&lt;/p&gt;</description>
    </item>
    <item>
      <title>Google&#39;s MLE-STAR: AI Agents That Automate Machine Learning Engineering</title>
      <link>https://yakinin.com/en/posts/20250804-google-mle-star-automated-machine-learning/</link>
      <pubDate>Mon, 04 Aug 2025 00:00:00 +0000</pubDate>
      <guid>https://yakinin.com/en/posts/20250804-google-mle-star-automated-machine-learning/</guid>
      <description>
&lt;p&gt;Google Cloud&amp;rsquo;s research team has unveiled MLE-STAR (Machine Learning Engineering via Search and Targeted Refinement), an AI agent system that marks a significant step toward the full automation of building ML pipelines. For anyone who has spent countless hours engineering features, selecting models, and optimizing hyperparameters, this development is worth paying close attention to.&lt;/p&gt;
&lt;p&gt;At its core, MLE-STAR moves beyond the limitations of traditional AutoML. Instead of relying on a predefined set of models and techniques, it uses an innovative approach that combines external knowledge with internal optimization.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI&#39;s Codex CLI: A Quiet Win for Open-Source</title>
      <link>https://yakinin.com/en/posts/20250417-openai-codex-cli-open-source/</link>
      <pubDate>Thu, 17 Apr 2025 00:00:00 +0000</pubDate>
      <guid>https://yakinin.com/en/posts/20250417-openai-codex-cli-open-source/</guid>
      <description>&lt;p&gt;OpenAI has released Codex CLI, an open-source AI agent for developers. This marks a quiet but significant victory for the open-source community.&lt;/p&gt;
&lt;p&gt;The tool allows developers to use natural language directly in the terminal—the agent interprets the request, then writes, executes, and tests the code. Most importantly, this entire process runs locally, without sending data to the cloud.&lt;/p&gt;
&lt;p&gt;With this release, the industry moves one step closer to a system that can independently understand, build, and deploy solutions. It underscores a critical point: the future isn&amp;rsquo;t just about choosing the right model, but about engineering the right architecture that connects &lt;strong&gt;thought → action&lt;/strong&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>DeepSeek-V3: A Quiet Release with Impressive Local Performance</title>
      <link>https://yakinin.com/en/posts/20250801-deepseek-v3-local-performance/</link>
      <pubDate>Thu, 27 Mar 2025 11:22:11 +0000</pubDate>
      <guid>https://yakinin.com/en/posts/20250801-deepseek-v3-local-performance/</guid>
      <description>&lt;div style=&#34;display: flex; justify-content: center; gap: 1em; flex-wrap: wrap;&#34;&gt;
  &lt;img src=&#34;https://yakinin.com/img/20250801-deepseek-v3-local-performance-0.jpg&#34; style=&#34;max-width: 350px; width: 100%;&#34; /&gt;
&lt;/div&gt;
&lt;p&gt;DeepSeek has once again followed its &amp;ldquo;quiet release&amp;rdquo; strategy, making its new DeepSeek-V3-0324 model available on Hugging Face without any major announcements. Instead of marketing hype, they&amp;rsquo;ve simply delivered a solution for the community to evaluate.&lt;/p&gt;
&lt;p&gt;I tested the model locally on a Mac Studio equipped with an M3 Ultra chip and saw impressive performance, generating over 20 tokens per second. This marks a significant acceleration for running capable models on local hardware, making it a viable option for developers.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
