<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>#GoogleAI on Home</title>
    <link>https://yakinin.com/en/tags/%23googleai/</link>
    <description>Recent content in #GoogleAI on Home</description>
    <generator>Hugo -- 0.148.2</generator>
    <language>en</language>
    <lastBuildDate>Fri, 05 Sep 2025 19:54:00 +0000</lastBuildDate>
    <atom:link href="https://yakinin.com/en/tags/%23googleai/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Google&#39;s EmbeddingGemma: A New Contender for On-Device RAG</title>
      <link>https://yakinin.com/en/posts/20250905-google-embeddinggemma-on-device-rag/</link>
      <pubDate>Fri, 05 Sep 2025 19:54:00 +0000</pubDate>
      <guid>https://yakinin.com/en/posts/20250905-google-embeddinggemma-on-device-rag/</guid>
      <description>&lt;p&gt;I usually default to OpenAI for embeddings, but Google’s new EmbeddingGemma model is a noteworthy development. It’s not just another model; its release is a strategic move that shows real promise for improving Retrieval-Augmented Generation (RAG) pipelines, especially in on-device and edge applications.&lt;/p&gt;
&lt;h2 id=&#34;what-is-embeddinggemma&#34;&gt;What is EmbeddingGemma?&lt;/h2&gt;
&lt;p&gt;Google has released EmbeddingGemma as a lightweight, efficient, and multilingual embedding model. At just 308M parameters, it’s designed for high performance in resource-constrained environments. This isn’t just about making a smaller model; it’s about making a &lt;em&gt;capable&lt;/em&gt; small model.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Google&#39;s AI Coding Agent &#39;Jules&#39; Launches Publicly, Powered by Gemini 2.5</title>
      <link>https://yakinin.com/en/posts/20250813-jules-public-launch/</link>
      <pubDate>Wed, 13 Aug 2025 15:41:50 +0000</pubDate>
      <guid>https://yakinin.com/en/posts/20250813-jules-public-launch/</guid>
      <description>&lt;div style=&#34;display: flex; justify-content: center; gap: 1em; flex-wrap: wrap;&#34;&gt;
  &lt;img src=&#34;https://yakinin.com/img/20250813-jules-public-launch-0.webp&#34; style=&#34;max-width: 350px; width: 100%;&#34; /&gt;
&lt;/div&gt;
&lt;p&gt;Google has officially moved its asynchronous coding agent, Jules, out of beta and into public availability. The key upgrade is its new engine: Gemini 2.5 Pro, which Google claims enhances its ability to generate high-quality code by first developing a structured plan.&lt;/p&gt;
&lt;h2 id=&#34;from-beta-to-public-launch&#34;&gt;From Beta to Public Launch&lt;/h2&gt;
&lt;p&gt;The public launch follows a substantial beta period where thousands of developers tackled tens of thousands of tasks, resulting in over 140,000 code improvements. This feedback has been used to refine the platform, leading to several key enhancements:&lt;/p&gt;</description>
    </item>
    <item>
      <title>Google&#39;s MLE-STAR: AI Agents That Automate Machine Learning Engineering</title>
      <link>https://yakinin.com/en/posts/20250804-google-mle-star-automated-machine-learning/</link>
      <pubDate>Mon, 04 Aug 2025 00:00:00 +0000</pubDate>
      <guid>https://yakinin.com/en/posts/20250804-google-mle-star-automated-machine-learning/</guid>
      <description>
&lt;p&gt;Google Cloud&amp;rsquo;s research team has unveiled MLE-STAR (Machine Learning Engineering via Search and Targeted Refinement), an AI agent system that marks a significant step toward the full automation of building ML pipelines. For anyone who has spent countless hours engineering features, selecting models, and optimizing hyperparameters, this development is worth paying close attention to.&lt;/p&gt;
&lt;p&gt;At its core, MLE-STAR moves beyond the limitations of traditional AutoML. Instead of relying on a predefined set of models and techniques, it retrieves external knowledge via search and then applies targeted refinement to optimize individual components of the pipeline.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
