OpenAI’s proposal to encrypt AI makes for a commendable headline, but it sidesteps a more fundamental issue. Before we debate the complex philosophy of encrypting artificial intelligence, we should ask a simpler, more urgent question: have they patched the basic vulnerabilities in their existing systems?
It’s easy to forget, but OpenAI has a history of security lapses, most notably the 2023 incident in which a caching bug exposed some users’ chat titles, and in certain cases payment details, to other users. This wasn’t a failure of advanced cryptography; it was a foundational security bug. They shipped a vulnerability and, as a result, exposed their users’ private conversations.
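To make "foundational security bug" concrete: the 2023 incident was publicly attributed to a bug in an open-source caching client that could return one user's cached data to another. The sketch below is a deliberately simplified illustration of that *class* of bug, not OpenAI's actual code; the function names and cache layout are invented for the example.

```python
# Hypothetical illustration of a cross-user cache leak: a response cache
# keyed on the request path alone, ignoring which user made the request.
cache = {}  # shared across all users of the service

def get_chat_titles_buggy(user_id, fetch):
    key = "/api/chat/titles"          # BUG: the key omits the user identity
    if key not in cache:
        cache[key] = fetch(user_id)   # first caller's data is stored...
    return cache[key]                 # ...and then served to every later caller

def get_chat_titles_fixed(user_id, fetch):
    key = ("/api/chat/titles", user_id)  # FIX: scope the cache entry to the user
    if key not in cache:
        cache[key] = fetch(user_id)
    return cache[key]
```

No cryptography is involved on either side of the fix; the flaw and its remedy live entirely in ordinary request-handling logic, which is precisely the point.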
From an engineering and product-first perspective, the priorities here seem backward. The first principle of building a secure and trustworthy system is to find and fix the holes you already have. It’s meticulous, unglamorous work. Opening a philosophical debate about encrypting AI, by contrast, is a grand, visionary gesture.
This move fits a broader pattern. We recently saw Sam Altman admit they “totally screwed up” the GPT-5 launch, forcing a rollback. This points to a recurring gap between visionary announcements and stable, real-world execution.
It’s hard to ignore the business incentives at play. “Encrypting AI” is a narrative that can attract billions in investment. “Debugging and patching legacy code” is an operational cost. While I understand the appeal of the grand vision, true innovation is built on a foundation of reliability.
Before we build a cryptographic vault for the AI, we need to be sure the building it’s in is secure. Let’s focus on solid engineering and fixing today’s problems before raising capital on tomorrow’s solutions.