Nice work - the daily-log-first approach resonates. We hit the same "re-onboarding Claude every morning" problem and went a different direction with MemoryLane (https://github.com/deusXmachina-dev/memorylane): it sits in the background capturing your activity (screenshots + OCR + summaries) and makes that context available to any AI chat via MCP.

Different tradeoff: Total Recall is curated (daily logs) and lean, while MemoryLane, for now, captures broadly and relies on search to surface what's relevant. I think they're complementary: your write gate keeps project knowledge tight, and broad capture with search fills in the "wait, what tab/webpage was that" gaps.

We applied to this batch (YC S26) and are still waiting on Apple notarization before a wider release. Happy to chat if anyone's curious about the screen-context approach.
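For anyone curious, the capture loop is conceptually simple. A minimal sketch, not our actual implementation: `mss` and `pytesseract` stand in for the real capture/OCR stack, and the JSONL log stands in for the searchable index behind the MCP server.

```python
import json
import time
from pathlib import Path

import mss                 # assumed screen-capture library
import pytesseract         # assumed OCR wrapper (needs the tesseract binary)
from PIL import Image

LOG = Path("~/.memorylane/captures.jsonl").expanduser()  # hypothetical location

def capture_once() -> None:
    """Screenshot the primary monitor, OCR it, append the text to a JSONL log."""
    with mss.mss() as sct:
        shot = sct.grab(sct.monitors[1])  # monitors[0] is the combined virtual screen
        img = Image.frombytes("RGB", shot.size, shot.bgra, "raw", "BGRX")
    text = pytesseract.image_to_string(img)
    LOG.parent.mkdir(parents=True, exist_ok=True)
    with LOG.open("a") as f:
        f.write(json.dumps({"ts": time.time(), "text": text.strip()}) + "\n")

if __name__ == "__main__":
    while True:
        capture_once()
        time.sleep(60)  # capture once a minute
```

The hard parts are everything around this loop: summarization, indexing, and not drowning in tokens at retrieval time.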
I'm going a bit broader with my system, but it's similar. Instead of trying to predict future behavior, I'm focusing on capturing new, meaningful information, preparing it for human review (no fully automatic writes), and pairing that with an ongoing review process, deduplication, and treating memory like a resource I manually curate (with tooling to make that easier, of course).

I think context needs to be treated primarily globally, with some project-specific data added on top. For example, my coding preferences don't change between projects, nor does my name. Other data, like the company I work for, changes infrequently, and some values change often and are expected to.

I also use Claude Code a lot more for thinking and writing these days, so there's less separation between topics than there would be across separate repos.
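A minimal sketch of the shape I mean, with all names purely illustrative:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class MemoryItem:
    text: str
    scope: str = "global"        # "global" (name, coding prefs) or a project id
    volatility: str = "stable"   # "stable", "slow" (e.g. employer), "fast"
    approved: bool = False       # nothing is written without human review

    @property
    def key(self) -> str:
        # dedup on normalized content so repeated captures don't pile up
        return hashlib.sha256(self.text.lower().strip().encode()).hexdigest()

class ReviewQueue:
    def __init__(self) -> None:
        self._items: dict[str, MemoryItem] = {}

    def propose(self, item: MemoryItem) -> bool:
        """Queue a candidate for human review; silently drop exact duplicates."""
        if item.key in self._items:
            return False
        self._items[item.key] = item
        return True

    def approve(self, key: str) -> MemoryItem:
        """Human said yes; only now does the item become writable to memory."""
        item = self._items[key]
        item.approved = True
        return item
```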
I'm a fan of human pruning (which you refer to as "write gates").

At this point, I'd argue any fully automated memory is a failure over time. I don't know if I'll be convinced otherwise.

If one were to go full automation, then I'd want it to generate suggestions for what to prune ("I haven't used this memory in a while..."). Actually, that might be useful in your tool as well. "Remove gates"?
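A sketch of what such a suggestion generator could look like, assuming each memory is a file whose mtime gets bumped on use. It only nominates; a human decides:

```python
from datetime import datetime
from pathlib import Path

STALE_DAYS = 90  # arbitrary threshold for this sketch

def prune_candidates(memory_dir: Path) -> list[Path]:
    """List memory files untouched for a while, for a human to review.

    Never deletes anything itself: the "remove gate" stays human.
    """
    now = datetime.now()
    stale = []
    for f in sorted(memory_dir.glob("**/*.md")):
        age_days = (now - datetime.fromtimestamp(f.stat().st_mtime)).days
        if age_days > STALE_DAYS:
            stale.append(f)
    return stale
```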
I built something similar for my needs [1]. I really like your idea of having write-gated memories.

[1] https://github.com/sibellavia/dory
the daily-log-first approach is a smart design choice: it decouples capture from commitment. curious about one edge case though: what happens when two parallel claude code sessions write to the same daily log simultaneously? with long-running tasks it's common to have multiple sessions open on the same project, and the daily logs are plain markdown files with no locking.

also, how consistent is the model at applying the five-point write gate? "does this change future behavior" feels clear in theory, but in practice i'd expect the model to be either overly conservative (dropping things that matter) or overly permissive (saving trivia that felt important mid-session) depending on context. have you noticed any patterns in what slips through vs what gets filtered incorrectly?
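for what it's worth, plain markdown files could still handle the concurrency case with an advisory lock around appends. a minimal posix-only sketch, not something the tool necessarily does today:

```python
import fcntl
from pathlib import Path

def append_to_daily_log(log: Path, entry: str) -> None:
    """Serialize concurrent appends from parallel sessions (POSIX only)."""
    with log.open("a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)  # blocks until any other session finishes
        try:
            f.write(entry.rstrip() + "\n")
            f.flush()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)
```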
Such a feature is very much needed, but I'll wait for a first-party, battle-tested solution.
Love the idea and the approach, both. I intend to try it.

Dislike READMEs written by or with an LLM.

They're filled with tropes and absolutes, as if written by a PdM pitching his budget rather than for engineers who need to make the thing work.

At some point, reading these becomes like hearing a rhythm in daily life (riding the subway, grinding espresso, typing in a cafe) and having a rhythm-matched earworm insert itself in your brain. Now you aren't hearing whatever made that rhythmic sound, only the unrelated earworm. You try to think about your README or Ask HN, but instead you're thinking about LLM slop and confabulation and trust.

(Just like new account ChatEngineer's comment opening with *"The [hard thing] is real."*, which leaves the otherwise natural rest of the comment more sus.)

Unsolicited opinion: it shouldn't be called total-recall, but curated-recall, selective-recall (I prefer this one), partial-recall, or anything that doesn't confuse users or the LLM as to (trope incoming, because where did they learn tropes anyway?) *what it is (and isn't)*...
I just want an infinite scroll in Claude Code, where I can easily search and view all my past conversations. Is that asking too much?
I like the name total recall haha!
One note from a first read: the memory folder should also go into .gitignore by default.
Good catch. I agree the safe default is to ignore memory/ since it can contain personal notes, people context, and daily logs. I’m updating the installer to add memory/ to .gitignore by default (along with CLAUDE.local.md and .claude/settings.local.json).

For teams that do want shared context, I’ll document a “team mode” gitignore pattern that commits only selected registers (e.g. decisions/projects) while keeping daily logs + preferences/people local.
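For concreteness, a sketch of what that team-mode pattern might look like (register names illustrative). Note `memory/*` rather than `memory/`: Git won't re-include paths inside a fully ignored directory, so the children have to be ignored individually for the negations to work:

```gitignore
# local-only context (the default)
memory/*
CLAUDE.local.md
.claude/settings.local.json

# team mode: re-include just the shared registers
!memory/decisions/
!memory/projects/
```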
Thanks. Thinking about it again, it is indeed safer to keep it ignored by default.

I've been using it since yesterday, and it's doing its work as intended without getting in the way. So far, works great.
done: https://github.com/davegoldblatt/total-recall/commit/152ab12
Do you do anything to manage the size of this memory, or is that left to the user? I think it would be interesting to make it behave more like real memory and have it degrade gracefully, e.g., consolidate old memories down to keywords only.

Random Tuesday thoughts. Neat work!
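A sketch of the consolidation idea, with the age threshold and the keyword extraction purely illustrative:

```python
import re
from collections import Counter
from datetime import datetime, timedelta
from pathlib import Path

STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "is", "it", "for"}

def consolidate(entry: Path, max_age: timedelta = timedelta(days=180)) -> None:
    """Degrade an old memory to its top keywords instead of deleting it."""
    age = datetime.now() - datetime.fromtimestamp(entry.stat().st_mtime)
    if age < max_age:
        return
    words = re.findall(r"[a-z']+", entry.read_text().lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    top = [w for w, _ in counts.most_common(10)]
    entry.write_text("consolidated keywords: " + ", ".join(top) + "\n")
```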
thanks! lmk if you have any feedback
Using something similar in my OpenClaw setup: hierarchical memory with L1/L2/L3 layers plus a "fiscal" agent that audits before writes. The gate is definitely the right abstraction.

One thing I've noticed: the "distillation" decision (what deserves L2 vs. what stays in L1) benefits from a second opinion. I have my fiscal agent check summaries against a simple rubric: "Does this change future behavior?" If yes, promote. If no, compact and discard.

Curious if you've experimented with read-time context injection in addition to the write gate? I've found that pulling relevant L2 into prompt context at session start helps continuity, but there's a tradeoff on token burn.

Great work. This feels like table-stakes infrastructure for 2026 agents.
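Roughly the shape of my read-time side, as a sketch: `relevance` is a stand-in for whatever scoring you use (BM25, embeddings, etc.), and tokens are approximated crudely to keep it dependency-free.

```python
def inject_context(query: str, l2_entries: list[str], token_budget: int = 1500) -> str:
    """Pull the most relevant L2 memories into the session prompt, capped by budget."""

    def est_tokens(text: str) -> int:
        # rough heuristic: ~0.75 words per token
        return int(len(text.split()) / 0.75)

    def relevance(entry: str) -> float:
        # crude lexical overlap; swap in real scoring here
        q = set(query.lower().split())
        e = set(entry.lower().split())
        return len(q & e) / (len(q) or 1)

    picked, used = [], 0
    for entry in sorted(l2_entries, key=relevance, reverse=True):
        cost = est_tokens(entry)
        if used + cost > token_budget:
            continue  # skip oversized entries, keep trying smaller ones
        picked.append(entry)
        used += cost
    return "\n\n".join(picked)
```

The budget cap is what keeps the token burn bounded; without it, continuity wins every argument and your context fills up.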