<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Research on 卓琪的开发笔记</title>
    <link>https://zhuoqidev.com/categories/research/</link>
    <description>Recent content in Research on 卓琪的开发笔记</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>zh-CN</language>
    <copyright>© 2026 Liu ZhuoQi</copyright>
    <lastBuildDate>Mon, 04 May 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://zhuoqidev.com/categories/research/index.xml" rel="self" type="application/rss+xml" />
    
    <item>
      <title>Why LLMs Have No Memory — A Research Report Covering 67 Primary Sources</title>
      <link>https://zhuoqidev.com/en/posts/llm-memory-research/</link>
      <pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate>
      
      <guid>https://zhuoqidev.com/en/posts/llm-memory-research/</guid>
      <description>&lt;p&gt;This is not AI pop-science. This is a cross-validated research sprint backed by &lt;strong&gt;67 primary sources&lt;/strong&gt; — vendor docs, arXiv papers, and researcher interviews — on a question every Agent builder hits: &lt;em&gt;why don&amp;rsquo;t LLMs remember anything?&lt;/em&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;→ &lt;a href=&#34;https://zhuoqidev.com/en/projects/llm-memory-research/&#34;&gt;Full report: 14-product comparison table, 9 engineering takeaways, 3-year paradigm roadmap&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;hr&gt;&#xA;&lt;h2 id=&#34;the-one-liner&#34;&gt;The One-Liner&lt;/h2&gt;&#xA;&lt;p&gt;Four independent constraints — &lt;strong&gt;O(n²) attention + KV cache VRAM + catastrophic forgetting + GDPR right-to-be-forgotten&lt;/strong&gt; — stacked together leave &amp;ldquo;stateless&amp;rdquo; as the only viable engineering solution. Every &amp;ldquo;Memory&amp;rdquo; feature you&amp;rsquo;ve seen (ChatGPT, Claude, Cursor) is &lt;strong&gt;structured text injected into the system prompt&lt;/strong&gt;. Zero weight modification. The next 1–3 years belong to &lt;strong&gt;stateless LLM kernels + stateful Agent memory layers&lt;/strong&gt;.&lt;/p&gt;</description>
      
    </item>
    
  </channel>
</rss>
