<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>#FutureOfAI on Home</title>
    <link>https://yakinin.com/en/tags/%23futureofai/</link>
    <description>Recent content in #FutureOfAI on Home</description>
    <generator>Hugo -- 0.148.2</generator>
    <language>en</language>
    <lastBuildDate>Wed, 13 Aug 2025 18:19:14 +0000</lastBuildDate>
    <atom:link href="https://yakinin.com/en/tags/%23futureofai/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>AI Memory Isn&#39;t the End Goal—It&#39;s the Beginning of a Knowledge Marketplace</title>
      <link>https://yakinin.com/en/posts/20250813-ai-memory-knowledge-marketplace/</link>
      <pubDate>Wed, 13 Aug 2025 18:19:14 +0000</pubDate>
      <guid>https://yakinin.com/en/posts/20250813-ai-memory-knowledge-marketplace/</guid>
      <description>&lt;p&gt;Anthropic&amp;rsquo;s recent release of a &amp;ldquo;memory&amp;rdquo; function for its Claude chatbot is being framed as another move in the AI arms race to increase user stickiness. The feature allows the AI to reference past conversations when prompted, keeping projects and context continuous. While a useful feature, I believe this points to a much more fundamental shift in the AI landscape.&lt;/p&gt;
&lt;p&gt;Everything is moving toward the accumulation of user interaction data into isolated, private memory volumes. This isn&amp;rsquo;t just about convenience; it&amp;rsquo;s about creating a foundation where knowledge itself becomes private and proprietary.&lt;/p&gt;</description>
    </item>
    <item>
      <title>AI Demonstrates Higher Emotional Intelligence Than Humans</title>
      <link>https://yakinin.com/en/posts/20250523-ai-emotional-intelligence-higher/</link>
      <pubDate>Fri, 23 May 2025 00:00:00 +0000</pubDate>
      <guid>https://yakinin.com/en/posts/20250523-ai-emotional-intelligence-higher/</guid>
      <description>&lt;p&gt;A new study from the University of Geneva and the University of Bern has shown that modern language models—including ChatGPT-4, Claude 3.5, and Gemini 1.5 Flash—outperform humans in emotional intelligence tests.&lt;/p&gt;
&lt;p&gt;The average score for AI was 82% correct answers, while the average for humans was just 56%.&lt;/p&gt;
&lt;p&gt;What&amp;rsquo;s more, ChatGPT-4 didn&amp;rsquo;t just pass the test; it generated an entirely new one from scratch. This AI-created test was subsequently validated with over 400 participants and shown to match the quality of assessments that human experts had spent years developing.&lt;/p&gt;</description>
    </item>
    <item>
      <title>A Mouse Brain, 1.6 Petabytes of Data, and the Path to AGI</title>
      <link>https://yakinin.com/en/posts/20250411-mouse-brain-digitization-ai/</link>
      <pubDate>Fri, 11 Apr 2025 00:00:00 +0000</pubDate>
      <guid>https://yakinin.com/en/posts/20250411-mouse-brain-digitization-ai/</guid>
      <description>&lt;div style=&#34;display: flex; justify-content: center; gap: 1em; flex-wrap: wrap;&#34;&gt;
  &lt;img src=&#34;https://yakinin.com/img/20250411-mouse-brain-digitization-ai-0.jpg&#34; style=&#34;max-width: 350px; width: 100%;&#34; /&gt;
&lt;/div&gt;
&lt;p&gt;Scientists recently digitized a single cubic millimeter of a mouse&amp;rsquo;s visual cortex, a project that generated 1.6 petabytes of data to map 84,000 neurons and half a billion synapses.&lt;/p&gt;
&lt;p&gt;To put that into perspective, the number of synapses in that tiny piece of brain tissue is comparable to the number of parameters in large-scale AI like DeepSeek or GPT models, and significantly more than the 29 billion parameters in a model like GigaChat. The analogy helps convey scale and complexity: just as synapses determine a brain&amp;rsquo;s processing capacity, parameters define the &amp;ldquo;power&amp;rdquo; of an AI.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
