<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>#ChatGPT on Home</title>
    <link>https://yakinin.com/en/tags/%23chatgpt/</link>
    <description>Recent content in #ChatGPT on Home</description>
    <generator>Hugo -- 0.148.2</generator>
    <language>en</language>
    <lastBuildDate>Fri, 22 Aug 2025 08:23:06 +0000</lastBuildDate>
    <atom:link href="https://yakinin.com/en/tags/%23chatgpt/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Grok&#39;s Public Chats: A Predictable AI Privacy Failure</title>
      <link>https://yakinin.com/en/posts/20250822-grok-ai-privacy-failure/</link>
      <pubDate>Fri, 22 Aug 2025 08:23:06 +0000</pubDate>
      <guid>https://yakinin.com/en/posts/20250822-grok-ai-privacy-failure/</guid>
      <description>&lt;p&gt;It’s a classic story at this point. We saw it recently with OpenAI’s ChatGPT, and now it’s Grok’s turn. Elon Musk’s xAI has inadvertently published hundreds of thousands of its users&amp;rsquo; private conversations, making them fully searchable on Google. This wasn&amp;rsquo;t a sophisticated hack; it was a fundamental product design flaw.&lt;/p&gt;
&lt;h2 id=&#34;the-feature-that-became-a-bug&#34;&gt;The Feature That Became a Bug&lt;/h2&gt;
&lt;p&gt;The mechanism was naively simple. When a Grok user hit the &amp;ldquo;share&amp;rdquo; button to send a conversation to a colleague or friend, the system generated a unique URL. But instead of being treated as a private link, that URL was left publicly reachable and open for search engines to index. In effect, &amp;ldquo;sharing&amp;rdquo; meant &amp;ldquo;publishing to the open web&amp;rdquo; without any warning or disclaimer.&lt;/p&gt;
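&lt;p&gt;The missing safeguard is well understood: a shared page can remain reachable by anyone holding the link while explicitly opting out of search indexing. What follows is a minimal, hypothetical sketch in Go (not xAI&amp;rsquo;s actual code; the route and handler names are purely illustrative) of a share endpoint that sets the &lt;code&gt;X-Robots-Tag&lt;/code&gt; response header Grok&amp;rsquo;s share pages evidently lacked.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-go&#34;&gt;// Hypothetical sketch, not xAI&#39;s code: serve a shared conversation
// at a unique URL while telling crawlers not to index it.
package main

import (
	&#34;fmt&#34;
	&#34;net/http&#34;
)

func shareHandler(w http.ResponseWriter, r *http.Request) {
	// X-Robots-Tag instructs search engines not to index this page
	// or follow its links, even though the URL is publicly reachable.
	w.Header().Set(&#34;X-Robots-Tag&#34;, &#34;noindex, nofollow&#34;)
	fmt.Fprintln(w, &#34;shared conversation rendered here&#34;)
}

func main() {
	http.HandleFunc(&#34;/share/&#34;, shareHandler)
	http.ListenAndServe(&#34;:8080&#34;, nil)
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A robots &lt;code&gt;meta&lt;/code&gt; tag in the page markup achieves the same result; either way, &amp;ldquo;shareable&amp;rdquo; and &amp;ldquo;indexable&amp;rdquo; stay decoupled.&lt;/p&gt;</description>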
    </item>
    <item>
      <title>Quality AI Isn&#39;t Free: The Real Lesson from the GPT-5 Launch</title>
      <link>https://yakinin.com/en/posts/20250813-openai-gpt-5-rollout-issues-and-psychosis/</link>
      <pubDate>Wed, 13 Aug 2025 00:00:00 +0000</pubDate>
      <guid>https://yakinin.com/en/posts/20250813-openai-gpt-5-rollout-issues-and-psychosis/</guid>
      <description>&lt;p&gt;My take on the recent discourse around GPT-5&amp;rsquo;s instability is that it&amp;rsquo;s less about a technical stumble and more about a classic release management problem, a challenge familiar to any large IT company.&lt;/p&gt;
&lt;p&gt;The more telling issue, however, is the wave of reports that GPT-5 performs worse than its predecessor. This isn&amp;rsquo;t a bug; it&amp;rsquo;s a feature, born of a business decision. It signals a strategic push to make the models more cost-effective. When you&amp;rsquo;re serving 700 million users, the priority shifts from peak performance to scalable, affordable operations. The casualty in that equation is often quality.&lt;/p&gt;</description>
    </item>
    <item>
      <title>AI Demonstrates Higher Emotional Intelligence Than Humans</title>
      <link>https://yakinin.com/en/posts/20250523-ai-emotional-intelligence-higher/</link>
      <pubDate>Fri, 23 May 2025 00:00:00 +0000</pubDate>
      <guid>https://yakinin.com/en/posts/20250523-ai-emotional-intelligence-higher/</guid>
      <description>&lt;p&gt;A new study from the University of Geneva and the University of Bern has shown that modern language models—including ChatGPT-4, Claude 3.5, and Gemini 1.5 Flash—outperform humans in emotional intelligence tests.&lt;/p&gt;
&lt;p&gt;The AI models averaged 82% correct answers, while human participants averaged just 56%.&lt;/p&gt;
&lt;p&gt;What&amp;rsquo;s more, ChatGPT-4 didn&amp;rsquo;t just pass the test; it generated an entirely new one from scratch. This AI-created test was subsequently validated with over 400 participants and shown to match the quality of assessments that human experts had spent years developing.&lt;/p&gt;</description>
    </item>
    <item>
      <title>How Often Do AI Search Engines Get It Wrong? A Sobering Look at the Data</title>
      <link>https://yakinin.com/en/posts/20253107-ai-search-accuracy-study/</link>
      <pubDate>Sat, 22 Mar 2025 23:09:57 +0000</pubDate>
      <guid>https://yakinin.com/en/posts/20253107-ai-search-accuracy-study/</guid>
      <description>&lt;div style=&#34;display: flex; justify-content: center; gap: 1em; flex-wrap: wrap;&#34;&gt;
  &lt;img src=&#34;https://yakinin.com/img/20253107-ai-search-accuracy-study-0.jpg&#34; style=&#34;max-width: 350px; width: 100%;&#34; /&gt;
&lt;/div&gt;
&lt;p&gt;A recent study by the Tow Center for Digital Journalism at Columbia University delivered a stark reality check on the current state of AI-driven search. The findings are a critical reminder that while generative AI is advancing at an incredible pace, its reliability in retrieving and citing factual information is still deeply flawed.&lt;/p&gt;
&lt;p&gt;The researchers tested eight leading AI search systems, including prominent models like ChatGPT and Google&amp;rsquo;s Gemini. The results reveal a significant gap between what these systems can generate and what they can accurately retrieve and cite.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
