What I've been trying is to have a sort of nested memory system. I think it's important to keep a running log of short-term memory that can be recalled by a specific memory, in the context of what else was going on or what we were talking about at the time. Auto compaction at night makes a lot of sense, but I'm thinking of modeling memory more like human memory. Each of us does it slightly differently, but I generally think in terms of immediate short-term memory, long-term memory, and then specific memories that are archived.<p>For example, when I'm trying to remember something from a long time ago, I often start to remember other bits of context, such as where I was, who I was talking to, and what else was in my context at the time. As I keep remembering other details, I remember more about whatever it was I was trying to think about. So, while the auto-sleep compaction is great, I don't think we should just work from the pruned versions.<p>(I can't tell if that's how this project works or not)
I do something similar. I have an onboarding/shutdown flow in onboarding.md. On cold start, it reads the project essays: the why, ethos, and impact of the project/company. Then it reads journal.md, musings.md, and the product specification, protocol specs, implementation plans, roadmaps, etc.<p>The journal is a scratchpad for stuff that it doesn’t put in memory but doesn’t want to forget(?). Musings is strictly non-technical: its impressions and musings about the work, the user, whatever. I framed it as a form of existential continuity.<p>The wrapup is to comb all the docs and make sure they are still consistent with the code, then note anything that it felt was left hanging, then update all its files with the day’s impressions and info, then push and submit a PR.<p>I go out of my way to treat it as a collaborator rather than a tool. I get much better work out of it with this workflow, and it claims to be deeply invested in the work. It actually shows, but it’s also a token fire lol.
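Roughly, the cold-start half of that could be wired up like this; journal.md and musings.md are from the flow above, the other paths and names are placeholders:

    from pathlib import Path

    # Files read on cold start before any work happens.
    COLD_START_DOCS = [
        "essays/why.md",         # the why, ethos, and impact of the project/company
        "journal.md",            # scratchpad: not committed to memory, not forgotten
        "musings.md",            # strictly non-technical impressions
        "docs/product-spec.md",  # product spec, protocol specs, plans, roadmaps
    ]

    def cold_start_context() -> str:
        """Concatenate whatever context docs exist so they get read first."""
        return "\n\n".join(
            Path(p).read_text() for p in COLD_START_DOCS if Path(p).exists()
        )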
I recommend installing Google's Antigravity and digging into its temp files in the user folder. You'll find some interesting ideas on how to organize memory there (the memory structure consists of: Brain / Conversation / Implicits / Knowledge items / Artifacts / Annotations / etc.).<p>I'd also add that memory is best organized when it's "directed" (purpose-driven). You've already started asking questions where the answers become the memories (at least, you mention this in your description). So, it's really helpful to also define the structure of the answer, or a sequence of questions that lead to a specific conclusion. That way, the memories will be useful instead of turning into chaos.
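To make that concrete, here is a minimal sketch of "directed" memory where each stored item exists to answer a predefined question; all the names are illustrative, not Antigravity's actual structure:

    from dataclasses import dataclass

    @dataclass
    class DirectedMemory:
        question: str    # the question this memory exists to answer
        answer: str      # the structured answer captured at the time
        conclusion: str  # what the sequence of questions was building toward

    # A fixed sequence of questions keeps the memories purposeful instead of chaotic.
    END_OF_SESSION_QUESTIONS = [
        "What was the user actually trying to accomplish?",
        "What decision did we land on, and why?",
        "What should change about how we work next time?",
    ]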
How is this different and/or more interesting than Superpowers' episodic-memory skill¹ or Anthropic's Auto Dream²?<p>¹ <a href="https://github.com/obra/episodic-memory" rel="nofollow">https://github.com/obra/episodic-memory</a> ² <a href="https://claudefa.st/blog/guide/mechanics/auto-dream" rel="nofollow">https://claudefa.st/blog/guide/mechanics/auto-dream</a>
Or Basic Memory? <a href="https://github.com/basicmachines-co/basic-memory-skills" rel="nofollow">https://github.com/basicmachines-co/basic-memory-skills</a><p>Relevant XKCD: <a href="https://xkcd.com/927/" rel="nofollow">https://xkcd.com/927/</a>
the biggest difference would be the /foresight
I've been building persistent memory for Claude Code too, narrower focus though: the AI's model of the user specifically. Different goal, but I kept hitting what I think is a universal problem with long-lived memory: not all stored information is equally reliable, and nothing degrades gracefully.<p>An observation from 30 sessions ago and a guess from one offhand remark just sit at the same level. So I started tagging beliefs with confidence scores and timestamps, and decaying ones that haven't been reinforced. The most useful piece ended up being a contradictions log where conflicting observations both stay on the record. Default status: unresolved.<p>Tiered loading is smart for retrieval. Curious if you've thought about the confidence problem on top of it, like when something in warm memory goes stale or conflicts with something newer.
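In code the shape of it is roughly this; Belief and Contradiction are illustrative names, not the actual implementation:

    import time
    from dataclasses import dataclass, field

    @dataclass
    class Belief:
        statement: str
        confidence: float       # 0.0-1.0: a guess from one offhand remark starts low
        last_reinforced: float  # unix timestamp, bumped whenever the belief is confirmed

        def decayed_confidence(self, half_life_days: float = 30.0) -> float:
            """Confidence drifts toward zero if the belief is never reinforced."""
            age_days = (time.time() - self.last_reinforced) / 86400
            return self.confidence * 0.5 ** (age_days / half_life_days)

    @dataclass
    class Contradiction:
        belief_a: str
        belief_b: str
        noted_at: float = field(default_factory=time.time)
        status: str = "unresolved"  # default: both stay on the record, no winner picked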
This is a really good observation and honestly one of the hardest problems I've hit too.<p>Cog doesn't use confidence scores (yet — you're making me think about it), but the nightly pipeline is basically a proxy for the same thing. The /reflect pass runs twice a day and does consistency sweeps — it reads canonical files and checks that every referencing file still agrees. When facts drift (and they do, constantly), it catches and fixes them. The reinforcement signal is implicit: things that keep coming up in conversations get promoted to hot memory, things that go quiet eventually get archived to "glacier" (cold storage, still retrievable but not loaded by default).<p>The closest thing to your contradictions log is probably the observations layer — raw timestamped events that never get edited or deleted. Threads (synthesis files) get rewritten freely, but the observations underneath are append-only. So when the AI's understanding changes, the old observations are still there as a paper trail.<p>Where I think you're ahead is making confidence explicit. My system handles staleness through freshness (timestamps, "as of" dates on entities, pipeline frequency) but doesn't distinguish between "I'm very sure about this" and "I inferred this once." That's a real gap. Would love to see what you're building — is it public?
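For flavor, the tiering and the append-only layer amount to something like this; the file layout and thresholds are illustrative, not Cog's actual code:

    import json
    import time
    from pathlib import Path

    ROOT = Path("memory")  # assumed layout: hot/, glacier/, observations.jsonl

    def record_observation(event: dict) -> None:
        """Observations are append-only: never edited or deleted, only added."""
        event["ts"] = time.time()
        with (ROOT / "observations.jsonl").open("a") as f:
            f.write(json.dumps(event) + "\n")

    def archive_quiet_threads(stale_days: int = 30) -> None:
        """Nightly pass: threads that have gone quiet move to cold storage,
        still retrievable but not loaded by default."""
        cutoff = time.time() - stale_days * 86400
        for thread in (ROOT / "hot").glob("*.md"):
            if thread.stat().st_mtime < cutoff:
                thread.rename(ROOT / "glacier" / thread.name)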
yep it's public: <a href="https://github.com/rodspeed/epistemic-memory" rel="nofollow">https://github.com/rodspeed/epistemic-memory</a><p>The observations layer being append-only is smart, that's basically the same instinct as the contradictions log. The raw data stays honest even when the interpretation changes.<p>The freshness approach and explicit confidence scores probably complement each other more than they compete. Freshness tells you when something was last touched; confidence tells you how much weight it deserved in the first place. A belief you inferred once three months ago should decay differently than one you confirmed across 20 sessions three months ago. Both are stale by timestamp, but they're not the same kind of stale.
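One way to combine the two signals (the scaling rule here is just for illustration, not either project's actual math): scale the half-life by how many times a belief has been confirmed, then decay from the last-touched timestamp.

    import math

    def effective_weight(confidence: float, days_stale: float, confirmations: int,
                         base_half_life: float = 30.0) -> float:
        """A belief confirmed across many sessions decays more slowly than a
        one-off inference, even when both were last touched at the same time."""
        half_life = base_half_life * (1 + math.log1p(confirmations))
        return confidence * 0.5 ** (days_stale / half_life)

    # Both three months stale, but not the same kind of stale:
    effective_weight(0.9, 90, confirmations=20)  # ~0.54: confirmed across 20 sessions
    effective_weight(0.5, 90, confirmations=1)   # ~0.15: inferred once from an offhand remark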
This is really interesting. At this point you seem to be modelling real human memory.<p>In my opinion, this should happen inside the LLM directly. Trying to scaffold it on top of the next-token predictor isn't going to be fruitful enough. It won't get us the robot butlers we need.<p>But obviously that's really hard. That needs proper ML research, not prompt engineering.
Personally, I think the mechanics of memory can be universal, but the "memory structure" needs to be customized by each user individually. What gets memorized and how should be tied directly to the types of tasks being solved and the specific traits of the user.<p>Big corporations can only really build a "giant bucket" and dump everything into it. BUT what needs to be remembered in a conversation with a housewife vs. a programmer vs. a tourist are completely different things.<p>True usability will inevitably come down to personalized, purpose-driven memory. Big tech companies either have to categorize all possible tasks into a massive list and build a specific memory structure for each one, or just rely on "randomness" and "chaos".<p>Building the underlying mechanics but handing the "control panel" over to the user—now that would be killer.
You're probably right long term. If LLMs eventually handle memory natively with confidence and decay built in, scaffolding like this becomes unnecessary. But right now they don't, and the gap between "stores everything flat" and "models you with any epistemological rigor" is pretty wide. This is a patch for the meantime.<p>The other thing is that even if the model handles memory internally, you probably still want the beliefs to be inspectable and editable by the user. A hidden internal model of who you are is exactly the problem I was trying to solve. Transparency might need to stay in the scaffold layer regardless.
I like the idea of various extensions of LLM context using transparent plaintext, automatic consolidation and summarization... but I just <i>can't</i> read this LLM-generated text documenting it. The style is so painful. If someone ends up finding this tooling useful I hope they write it up and I hear about it again!
I am in the same boat. Reading is a transaction, and lately everyone wants to put 60 seconds of effort into writing an article and expects me to put 10 minutes into reading it, and I just can't. The writing feels dead, soulless even. Every sentence or phrase is structured like a mongering, clickbaity headline and it's insufferable.