RAG vs LLM Wiki vs Plain Text — A Decision Framework for Agent Long-Term Memory
Every agent builder hits this question eventually: where do I store user data so the agent remembers it in the next session?
Three approaches dominate the landscape: RAG (vector retrieval), LLM Wiki (structured knowledge injection), and plain-text context memory (the CLAUDE.md / Cursor Rules pattern). Each has vocal advocates. But picking wrong is expensive: a poorly tuned RAG pipeline becomes a noise generator, while an overgrown plain-text file becomes a token incinerator.
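To make the three patterns concrete, here is a deliberately minimal sketch of each. Everything here is illustrative: real RAG uses learned embeddings and a vector index, not the word-overlap scoring used below, and the function names and sample notes are invented for this example.

```python
def plain_text_memory(notes: list[str]) -> str:
    # CLAUDE.md / Cursor Rules pattern: the whole file is injected
    # into the prompt every session, relevant or not.
    return "\n".join(notes)

def rag_retrieve(notes: list[str], query: str, k: int = 2) -> list[str]:
    # Vector retrieval, approximated here by word overlap so the
    # sketch stays dependency-free. Only the top-k notes reach the prompt.
    q = set(query.lower().split())
    scored = sorted(notes, key=lambda n: -len(q & set(n.lower().split())))
    return scored[:k]

def wiki_lookup(wiki: dict[str, str], topic: str) -> str:
    # LLM Wiki pattern: structured pages, injected by key when the
    # agent decides a topic is relevant.
    return wiki.get(topic, "")

notes = [
    "User prefers dark mode in the editor.",
    "User's deploy target is AWS us-east-1.",
    "User likes concise answers.",
]
print(plain_text_memory(notes))                    # all notes, every time
print(rag_retrieve(notes, "deploy target aws", k=1))  # only the deploy note
print(wiki_lookup({"deploy": "AWS us-east-1"}, "deploy"))
```

The cost profiles fall out directly: plain text pays tokens for every note on every call, RAG pays a retrieval step to send only what matches, and the wiki pays up-front structuring effort for cheap keyed lookups.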