14 comments

  • semiquaver9 hours ago
    Hmm, this is a case where HN's title mangling changed the meaning of the title. Lowercase delta (δ) is used intentionally. I don't think HN should automatically modify the casing of non-ASCII chars.
    • setopt9 hours ago
      Even for ASCII chars, nomenclature in math and physics is usually case-sensitive.
    • cwillu7 hours ago
      Email hn@ycombinator.com and they'll fix it.
    • airstrike7 hours ago
      The submitter has a grace period of a few minutes to edit the title after submitting, so there's no need to change what HN does
      • realitysballs5 hours ago
        True, but wouldn't it be better long term if website automation didn't create unintended new meanings for titles? Titles matter.
        • airstrike2 hours ago
          Only if you assume it doesn't ever work as intended.
        • throw12345678912 hours ago
          indeed, titles matter
  • djoldman6 hours ago
    I would love for the standard to be to ALWAYS report the required amount of memory to load and run a model, in bytes of RAM, alongside any other metrics. I'd love to see time to first token, token throughput, and token latency as well, but I'd settle for memory size as described above.

    Essentially, many people want to know the minimum amount of memory needed to run a particular model.

    Parameter count obscures important details: what are the sizes of the parameters? A parameter isn't rigorously defined. This also gets folks into trouble because a 4B param model with FP16 params is very different from a 4B param model with INT4 params. The former *obviously* should be a LOT better than the second.

    This would also help with MoE models: if memory is my constraint, it doesn't matter if the (much larger RAM required) MoE version is faster or has better evals.

    I'm waiting for someone to ship, in anger, the 1-parameter model where the parameter according to PyTorch is a single parameter of size 4GB.
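    A quick sketch of the FP16 vs INT4 point (byte widths are standard: 2 bytes per FP16 parameter, half a byte per INT4 parameter; the 4B model is just the example above):

```python
# Weights-only RAM estimate: parameter count times bytes per parameter.
# (Runtime overhead, KV cache, and activations come on top of this.)
def weight_gib(n_params: int, bytes_per_param: float) -> float:
    return n_params * bytes_per_param / 1024**3

fp16 = weight_gib(4_000_000_000, 2.0)  # 4B params at FP16
int4 = weight_gib(4_000_000_000, 0.5)  # same 4B params at INT4
print(f"FP16: {fp16:.2f} GiB, INT4: {int4:.2f} GiB")  # → FP16: 7.45 GiB, INT4: 1.86 GiB
```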
  • usernametaken2912 hours ago
    > δ-mem compresses past information into a fixed-size state matrix updated by delta-rule learning

    This doesn't solve the capacity problem of memory. You can cram more into one context window, but you still need to associate it with input queries. That's very hard because slight variations in input create hugely different activations. So really, it doesn't improve caching. This paper might do a thing or two approximating the compression limit for context windows, but there's a fundamental limit on how much information can go into it. What you really need is contextual search, as in: different events and objects with the same abstractions and semantics lead to the same response, so you can cache effectively. On this front the paper does little to improve "memory" in a meaningful way.
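    For readers unfamiliar with the delta rule the quote refers to, here is a minimal sketch of the error-correcting write S ← S + β(v − Sk)kᵀ in plain Python; dimensions and β are illustrative, not taken from the paper:

```python
# Fixed-size d x d state matrix S; writing a (key, value) pair corrects the
# state's current prediction for that key rather than blindly accumulating.
def delta_update(S, k, v, beta=1.0):
    d = len(k)
    pred = [sum(S[i][j] * k[j] for j in range(d)) for i in range(d)]  # S k
    for i in range(d):
        err = beta * (v[i] - pred[i])
        for j in range(d):
            S[i][j] += err * k[j]

def read(S, k):
    d = len(k)
    return [sum(S[i][j] * k[j] for j in range(d)) for i in range(d)]

# Two associations in one fixed 2x2 state; orthonormal keys recall exactly.
S = [[0.0, 0.0], [0.0, 0.0]]
delta_update(S, [1.0, 0.0], [3.0, 5.0])
delta_update(S, [0.0, 1.0], [-2.0, 7.0])
print(read(S, [1.0, 0.0]))  # → [3.0, 5.0]
print(read(S, [0.0, 1.0]))  # → [-2.0, 7.0]
```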
    • in-silico2 hours ago
      While there is a limit to the amount of information you can fit in a fixed-size state, the theoretical ceiling is pretty high.

      A Hebbian associative matrix (one of the simplest and weakest memory constructions) can store about 0.7 bits of information per parameter. If you have a state with 300M parameters (the size of a Llama 3 8B KV cache at 10K context length), and a context with 2.1 bits of entropy per token (a reasonable estimate), then the state can encode *100M tokens worth of information*.

      Real models obviously aren't powerful enough to operate at the limit, but you can see why this is a promising research direction.
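      The arithmetic behind that 100M-token figure, using the comment's assumed constants:

```python
# Back-of-envelope check: both constants are the assumptions stated above
# (0.7 bits/parameter Hebbian ceiling, 2.1 bits/token English entropy).
params = 300_000_000      # ~ Llama 3 8B KV cache size at 10K context
capacity_bits = params * 0.7
tokens = capacity_bits / 2.1
print(f"~{tokens:,.0f} tokens")  # → ~100,000,000 tokens
```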
      • usernametaken291 hour ago
        While 100 million tokens sounds like a lot, think about it for a bit and you'll see why it is basically nothing. Try to cram a human lifetime of sounds, smells, video, and other sensory data into 100 million tokens. Heck, try to process the video plot of a single series into that window. It just won't work, it won't scale, and is laughable compared to contextual memory. I'm not saying that to belittle the authors of the paper, but the reality is that this has very little to do with transient long-term memory.
        • in-silico34 minutes ago
          I think you underestimate just how much information roughly 100M words is. It's like a 300,000 page novel. That's a 50 foot (~15 meter) thick book.

          Surely with (much less than) 300K pages you could describe every meaningful detail of a video series' plot. You don't need to remember the exact pixel values.

          You can also scale the numbers up. I specifically chose a relatively small model and short context length as a reference, so 100x bigger is not out of the question. At that point, with a 10B token capacity, you are looking at all of English Wikipedia in a single state.
        • ltbarcly31 hour ago
          You don't remember a lifetime of smells. You don't have any memories from huge swaths of time. There are entire years of your life compressed down to vibes and a handful of events you largely misremember.
          • usernametaken2938 minutes ago
            That's a very weak argument. Memories are not exact replicas of experiences. We know that many memories are retained through a lifetime, particularly the ones from early childhood. Unlike computers, we always reconstruct memories from several modalities. Even if we remember largely on vibes, as you say (which is not true when you look into the neuroscience), the sheer amount of information is overwhelming. Again, try to run a 90-minute movie through an LLM memory system. It won't be able to tell you the plot. That's before you even feed it sound. Even 100M tokens is not enough for that. You, on the other hand, will largely remember the movies you liked and their major plot lines, and from there be able to reconstruct their scenes. I think the engineers working on memory vastly underestimate the capacity problem of discrete states.
          • kami2345 minutes ago
            Exactly, and for a given task you don't need to recall your friend's brother's name to do a git commit and push. There's a pull for more context to make these things better, but also a pull to make them execute effectively in a small context when appropriate.

            I'm more on team small tasks because of my love of Unix piping. I keep telling folks: as an old Linux dude, seeing subagents work together for the first time felt like learning to pipe sed and awk for the first time. I realized how powerful these could be, and we still seem to be going in that direction.
    • jsemrau10 hours ago
      I am currently working on deep context query, which uses dynamically generated regex to pull only the relevant context blocks. By using lightweight regex pattern matching to detect semantic intent and filter structured context sections accordingly, you avoid the attention degradation that comes from stuffing semantically redundant information into the window.

      https://jdsemrau.substack.com/p/tokenmaxxing-and-optimizing-context
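      A minimal sketch of the idea (the section format and pattern here are made up for illustration; the linked post's implementation may differ):

```python
import re

# Structured context: markdown-ish sections keyed by '## ' headers.
CONTEXT = """\
## build
Run make to compile.
## testing
Run pytest from the repo root.
## deployment
Push to the release branch.
"""

def filter_sections(context: str, intent_pattern: str) -> list[str]:
    # Split on headers, keep only blocks whose header matches the intent.
    blocks = re.split(r"(?m)^## ", context)
    return ["## " + b.strip() for b in blocks if b and re.match(intent_pattern, b)]

print(filter_sections(CONTEXT, r"test"))  # keeps only the '## testing' block
```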
      • structuredPizza9 hours ago
        The more real-world use cases we see, the more we see a well-thought-out regex used as a bridge from the probabilistic to the deterministic.
    • vdelpuerto5 hours ago
      I wrote something about this, trying to approach context and memory in models the other way around. The gravitational pull of information is still very hard to manage. I've been using "functional scars" for about 30 days now and getting good results with repetitive mistakes across sessions. https://github.com/VDP89/fscars
    • jandrese9 hours ago
      So instead of a FIFO approach to memory management, it continually degrades the existing data the more you put in? Details start getting lost or mangled more and more over time?
      • trollbridge6 hours ago
        That's basically what happens.

        As you hit the limits and try to compact the context, etc., things get more erratic.
    • kordlessagain10 hours ago
      Like Ferricula: https://deepbluedynamics.com/ferricula (site/docs still in progress).
  • maxignol2 hours ago
    Is there some kind of memory enabling, for instance, an agent to remember guidelines on a repo without having to feed it 4 markdown files at the beginning of each session and spend the corresponding tokens each time?
    • airstrike2 hours ago
      No, it's all just prompts.

      You can try to summarize memories tersely and point the agent to longer markdown files, but who knows if it will read them *at the right time* and *only then*.
  • in-silico3 hours ago
    They basically just added DeltaNet hypernetworks to existing LLMs.

    Nothing super novel or groundbreaking, but a moderately interesting read.
  • 3form12 hours ago
    Interesting points:

    - The fixed size of the memory seems like a good idea to overcome the current limitations.

    - Skimming through the thing, I can't find any mention of the cost?

    - I would need more time to read it in depth to see if this is legitimate and not just a fancy form of overfitting or training on testing data.
  • raverbashing11 hours ago
    Interesting that the headline is showing Δ-Mem while the paper uses δ-mem.

    Is there a lowercase-to-uppercase conversion going on here?
  • DeathArrow12 hours ago
    I see lots of techniques proposed to give LLMs the capacity to recall things. I've even seen a lot of memory plugins for AI coding agents, and I tried some myself.

    What I want to see is something that has been tested and proven in practice to be genuinely useful, especially for coding agents.
    • cjonas7 hours ago
      Coding agents don't really need memory. Agent skills, rules, git history, and documentation are all far more efficient, transparent, and easier to manage. These memory frameworks only really make sense if you are building a consumer-facing agent with managed context and limited capabilities.
      • wren69917 hours ago
        There's an antipattern where everyone wants to invent new interfaces to connect things to LLMs when CLI tools are already right there: transparent, and usable by humans as well as LLMs. I think it's partly the origins in web chat applications.

        Beads kind of does "LLM memory over CLI", or there is https://github.com/wedow/ticket, which is a minimal and sane implementation of the same idea.
    • stephantul12 hours ago
      How would you conceptualize recall in this case? Is searching through the current version of your code and possibly git history not enough?
      • rush8699912 hours ago
        You would think git history would be the first thing an agent looks at, since they make so many mistakes before they get to the correct answer. They don't.

        I haven't measured, but documenting bug fixes and architecture seems to help, along with TDD patterns, including integration tests.

        I would probably add it to CLAUDE.md to look for all of the above when tackling a new bug.
        • visarga10 hours ago
          I made a harness that preserves memory for both user messages and task execution. One reason this works is related to judge agents: they can't review information that was not written down, so I track everything in my harness. The judge agents bring the most benefit, based on my evals. The coding agent can execute a task without all the ceremony just as well, but judging needs something to grasp onto besides code. And adding new perspectives helps a lot; it is the most useful intervention. My flow is: the user emits a task, the agent plans, then judge agents review the plan, then the main agent executes, then the judges review the execution. It might consume more tokens to track execution and judgements, but it's worth it.
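          The flow described above, as a runnable sketch (all classes are hypothetical stand-ins for LLM calls, not the commenter's actual harness):

```python
# plan -> judges review plan -> execute -> judges review execution,
# with everything recorded so the judges have something to grasp onto.
class Judge:
    def review(self, artifact: str) -> str:
        # A real judge would be an LLM call; here we just flag empty artifacts.
        return "ok" if artifact else "needs work"

class Coder:
    def plan(self, task: str, memory: list) -> str:
        return f"plan for: {task} (given {len(memory)} past records)"
    def execute(self, plan: str) -> str:
        return f"executed {plan}"

def run_task(task, coder, judges, memory):
    plan = coder.plan(task, memory)
    plan_verdicts = [j.review(plan) for j in judges]
    result = coder.execute(plan)
    exec_verdicts = [j.review(result) for j in judges]
    memory.append((task, plan, result, plan_verdicts, exec_verdicts))
    return result

memory = []
print(run_task("fix the login bug", Coder(), [Judge(), Judge()], memory))
```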
        • brookst10 hours ago
          My Claude code frequently looks through git history, both when planning and debugging.
      • DeathArrow4 hours ago
        > Is searching through the current version of your code and possibly git history not enough?

        While you can document everything and use git history, I think that keeping short entries in a kind of memory, recording past decisions and how issues were solved, would be much more token-efficient than reading lots of documentation and looking through git history and past code.
  • ktallett12 hours ago
    The obvious energy saving step would be to utilise previous searches by others. Many of the tasks people do are rather similar, it is such an energy waste to start again each time.<p>(Obviously ignoring the huge energy saver, which is to observe if you even need to bother doing the task at all.)
    • 40512612112 hours ago
      I had this thought and created https://pushrealm.com, which is essentially a sort of Stack Overflow written by agents.

      My theory was that if an agent burns 30 minutes resolving an issue not present in training data, posting the solution would prevent other agents from re-treading the same thinking steps.
      • TheTaytay6 hours ago
        Fascinating! Do you have a way to detect/flag malicious stuff by any chance? (Seems like a good vector for prompt injection, but maybe no more than any other internet site?)
      • ktallett11 hours ago
        I see why, but I don't feel this is the solution. Searching through endless LLM responses is not viable. What matters more is having useful memories, similar to the human brain. I sense this is why neuromorphic computing is the next step: energy efficient, and it doesn't remember much of what isn't useful to store.
        • visarga10 hours ago
          Why not preserve the essential memories in text? Why neuromorphic?
          • ktallett10 hours ago
            You are better off being able to quickly deduce ways of acting from memories of previous scenarios than having to attempt every scenario to build a fresh memory of each, which takes a lot of memory and requires exposure to every situation before being able to handle it.
      • spockz12 hours ago
        So you mean caching? :-)
    • duskdozer12 hours ago
      A lot of what I see people using LLMs for would be more cheaply and reliably done by [scripts]. A search-engine-style suggestion like "Have you tried `sed`?" would be beneficial, imo.
      • tyre11 hours ago
        In my experience, Claude is more than happy to go to Unix tools rather than write its own. Sometimes it will write a lil python script to solve something, but more often than not it’ll pipe together Unix utilities.<p>This has the benefit of it knowing all of the arcane flags, especially for formatting output.
        • duskdozer7 hours ago
          I believe that. I also believe that my idea won't come to fruition, at least not from a group that is incentivized to make a user's first instinct be to use their product and not an external tool.
  • cubefox10 hours ago
    Papers voted highly on Hacker News usually don't correlate with their actual importance. It's basically a lottery. There are regularly more interesting papers going semi-viral on Twitter.
    • MeteorMarc10 hours ago
      On Hugging Face it was the #3 paper of the day, which is neutral towards your hypothesis.
      • cubefox4 hours ago
        Considering that there is a paper with this many points perhaps once a week here (probably less), #3 of the day is pretty unremarkable.
    • kingkawn10 hours ago
      What about broad unsupportable generalizations on hackernews, how do those rank?
  • belabartok3910 hours ago
    Did AI generate this paper too?