14 comments

  • fidorka 27 minutes ago
    Nice work - the daily-log-first approach resonates. We hit the same "re-onboarding Claude every morning" problem and went a different direction with MemoryLane (https://github.com/deusXmachina-dev/memorylane): it sits in the background capturing your activity (screenshots + OCR + summaries) and makes that context available to any AI chat via MCP.

    Different tradeoff - Total Recall is curated (daily logs) and lean, while MemoryLane for now captures broadly and relies on search to surface what's relevant. I think they are complementary: your write gate keeps project knowledge tight, and broad capture with search fills in the "wait, what tab/webpage was that" gaps.

    We applied to this batch (YC S26) and are still waiting on Apple notarization before a wider release. Happy to chat if anyone's curious about the screen-context approach.
  • andershaig 1 hour ago
    I'm going a bit broader with my system, but it's similar. Instead of trying to predict future behavior, I'm focusing on capturing new, meaningful information, preparing it for human review (no fully automatic writes), and pairing that with an ongoing review process, deduplication, and treating it like a resource I manually curate (with help to make that easier, of course).

    I think context needs to be treated primarily globally, with the addition of some project-specific data. For example, my coding preferences don't change between projects, nor does my name. Other data, like the company I work for, can change, but infrequently, and some values change frequently and are expected to.

    I also use Claude Code a lot more for thinking or writing these days, so there's less separation between topics than there would be in separate repos.
  • 4b11b4 1 hour ago
    I'm a fan of human pruning (which you refer to as "write gates").

    At this point, I'd argue any "automated memory" is a failure over time. I don't know if I'll be convinced otherwise.

    If one were to go full automation, then I'd want generated suggestions of what to prune ("I haven't used this memory in a while..."). Actually, that might be useful in your tool as well. "Remove-gates"?
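    A minimal sketch of what such a remove-gate could look like (the paths and threshold are hypothetical; it only assumes memories live as markdown files under a memory/ directory and uses file mtime as a rough stand-in for "last time this memory was touched"):

    ```python
    #!/usr/bin/env python3
    """Remove-gate sketch: flag stale memory files for human review."""
    from datetime import datetime, timedelta
    from pathlib import Path

    MEMORY_DIR = Path("memory")       # hypothetical layout
    STALE_AFTER = timedelta(days=60)  # tune to taste

    def stale_memories(root=MEMORY_DIR, cutoff=STALE_AFTER):
        if not root.is_dir():
            return
        now = datetime.now()
        for path in sorted(root.rglob("*.md")):
            age = now - datetime.fromtimestamp(path.stat().st_mtime)
            if age > cutoff:
                yield path, age

    if __name__ == "__main__":
        for path, age in stale_memories():
            # Only suggest; a human still decides what actually gets removed.
            print(f"prune candidate: {path} (untouched for {age.days} days)")
    ```

    Same idea as the write gate, just in reverse: the tooling nominates, the human prunes.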
  • sibellavia 1 hour ago
    I built something similar for my needs [1]. I really like your idea of having write-gated memories.

    [1] https://github.com/sibellavia/dory
  • ejae_dev 1 hour ago
    the daily-log-first approach is a smart design choice: it decouples capture from commitment. curious about one edge case though: what happens when two parallel claude code sessions write to the same daily log simultaneously? with long-running tasks it's common to have multiple sessions open on the same project, and the daily logs are plain markdown files with no locking.

    also, how consistent is the model at applying the five-point write gate? "does this change future behavior" feels clear in theory, but in practice i'd expect the model to be overly conservative (dropping things that matter) or overly permissive (saving trivia that felt important mid-session) depending on context. have you noticed any patterns in what slips through vs what gets filtered incorrectly?
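    for reference, a minimal sketch of one way the append could be made safe under concurrency (POSIX-only advisory locking via fcntl; the memory/daily path and the helper name are assumptions, not how total recall actually writes today):

    ```python
    import fcntl
    from datetime import date
    from pathlib import Path

    def append_to_daily_log(entry: str, log_dir: Path = Path("memory/daily")) -> None:
        """Append one entry to today's log while holding an advisory lock,
        so two concurrent sessions can't interleave partial writes."""
        log_dir.mkdir(parents=True, exist_ok=True)
        log_path = log_dir / f"{date.today().isoformat()}.md"
        with open(log_path, "a", encoding="utf-8") as f:
            fcntl.flock(f, fcntl.LOCK_EX)  # blocks until the other session is done
            try:
                f.write(entry.rstrip() + "\n")
                f.flush()
            finally:
                fcntl.flock(f, fcntl.LOCK_UN)
    ```

    in practice small appends in append mode rarely interleave, but an explicit lock makes the behavior deterministic instead of lucky.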
  • agluszak 1 hour ago
    Such a feature is very much needed, but I'll wait for the first-party, battle-tested solution.
  • Terretta 1 day ago
    Love the idea and the approach, both. Intend to try it.

    Dislike READMEs written by or with the LLM.

    They're filled with tropes and absolutes, as if written by a PdM to pitch his budget rather than for engineers needing to make it work.

    At some point, reading these becomes like hearing a rhythm in daily life (riding the subway, grinding espresso, typing in a cafe) and a rhythm-matched earworm inserts itself in your brain. Now you aren't hearing whatever made that rhythmic sound, but the unrelated earworm. Trying to think about your README or Ask HN, instead thinking about LLM slop and confabulation and trust.

    (Just like new account ChatEngineer's comment opening with "The [hard thing] is real." which leaves the otherwise natural rest of the comment more sus.)

    Unsolicited opinion: it shouldn't be called total-recall, but curated-recall or selective-recall (prefer this one) or partial-recall or anything that doesn't confuse users or the LLM as to (trope incoming, because where did they learn tropes anyway?) what it is (and isn't)...
    • davegoldblatt 16 hours ago
      roger that! good point on the readme, was aiming for velocity of shipping. will rework
      • rco8786 2 hours ago
        Just +1ing on the readme. It's clearly an LLM dump that repeats a lot of the same points multiple times and is missing a very, very important section:

        How to use Total Recall. It explains all of the concepts, but lacks a "quick start" section for someone who is ready to get up and running.
        • DonHopkins 2 hours ago
          I want an in-depth tutorial, so I can really wrap my head around it, because it blows my mind!

          How to Learn Total Recall in Two Weeks:

          https://www.youtube.com/watch?v=-KFH--_cdiI
  • iamleppert 49 minutes ago
    I just want an infinite scroll in Claude Code, where I can easily search and view all my past conversations. Is that asking too much?
  • dhruv3006 4 hours ago
    I like the name total recall haha!
  • nunobrito 4 days ago
    From a first read, the memory folder should also go into .gitignore by default
    • davegoldblatt 4 days ago
      Good catch. I agree the safe default is to ignore memory/ since it can contain personal notes, people context, and daily logs. I'm updating the installer to add memory/ to .gitignore by default (along with CLAUDE.local.md and .claude/settings.local.json).

      For teams that do want shared context, I'll document a "team mode" gitignore pattern that commits only selected registers (e.g. decisions/projects) while keeping daily logs + preferences/people local.
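      The team-mode pattern would look roughly like this (register names are illustrative; note it has to be memory/* rather than memory/, since git can't re-include files whose parent directory is excluded):

      ```gitignore
      # keep local-only context out of the repo
      CLAUDE.local.md
      .claude/settings.local.json

      # ignore everything under memory/ ...
      memory/*
      # ... except the registers the team shares
      !memory/decisions/
      !memory/projects/
      ```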
      • nunobrito 4 days ago
        Thanks, thinking about it again, it was indeed safer to keep it hidden by default.

        I've been using it since yesterday, and it is doing its work as intended without getting in the way. So far, works great.
        • davegoldblatt 2 days ago
          great to hear! lmk if you have any feedback
    • davegoldblatt 4 days ago
      done: https://github.com/davegoldblatt/total-recall/commit/152ab12
  • ChatEngineer 4 days ago
    [flagged]
    • chrisss395 3 hours ago
      Do you do anything to manage the size of this memory, or is that left to the user? I think it would be interesting to make it behave more like real memory and have a way to degrade gracefully, e.g., consolidate old memories to keywords only?

      Random Tuesday thoughts. Neat work!
    • davegoldblatt 3 days ago
      thanks! lmk if you have any feedback
      • ChatEngineer 1 day ago
        Using something similar in my OpenClaw setup: hierarchical memory with L1/L2/L3 layers plus a "fiscal" agent that audits before writes. The gate is definitely the right abstraction.

        One thing I've noticed: the "distillation" decision (what deserves L2 vs. what stays in L1) benefits from a second opinion. I have my fiscal agent check summaries against a simple rubric: "Does this change future behavior?" If yes, promote. If no, compact and discard.

        Curious if you've experimented with read-time context injection vs. just the write gate? I've found pulling relevant L2 into prompt context at session start helps continuity, but there's a tradeoff on token burn.

        Great work; this feels like table-stakes infrastructure for 2026 agents.
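        A rough sketch of that second-opinion gate, for the curious (ask_model is a stand-in for whatever reviewer model or agent does the audit; nothing here is specific to Total Recall or OpenClaw):

        ```python
        from dataclasses import dataclass

        RUBRIC = "Does this change future behavior? Answer PROMOTE or DISCARD, then explain briefly."

        @dataclass
        class GateResult:
            promote: bool
            verdict: str

        def audit_summary(summary: str, ask_model) -> GateResult:
            """Second-opinion write gate: only summaries that change future
            behavior get promoted from the daily log (L1) to durable memory (L2).
            `ask_model` is any callable that sends a prompt to a reviewer model
            and returns its text reply."""
            reply = ask_model(f"{RUBRIC}\n\nSummary:\n{summary}").strip()
            return GateResult(promote=reply.upper().startswith("PROMOTE"), verdict=reply)
        ```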