6 comments

  • "Here's the part nobody talks about"

    This feels like such an obvious LLM tell; it has that sort of breathless TED Talk vibe that was so big in the late aughts.
  • Darkskiez 1 day ago
    Yay, we've found another way to give LLMs biases.

    I can also see how obscure but useful nuggets of information, the kind you rarely need but that are critical when you do, will be lost.

    If the weighting were shared between users, an attacker could exploit this feedback loop to promote their product or ideology by executing fake interactions that look successful.
  • kburman 1 day ago
    This is a recipe for model collapse/poisoning.
  • sbinnee 1 day ago
    AI slop indeed. But it caught my eye nonetheless, because I have been doing some work around the same concept. Lately I've found that GitHub is flooded with "AI Agent Memory" modules, when in fact they are all skills-based (i.e., pure text-instruction) solutions.
  • stephantul 1 day ago
    Stop the slop!
  • littlestymaar 1 day ago
    Why do people submit AI slop like this here?
    • never_inline 1 day ago
      There's certainly an overlap between the crowd working on AI (since it's the current hype) and the crowd that finds this sort of hyped-up advertising speak impressive.

      For anyone who understands how embeddings work, it's pretty obvious that they are not enough and that you need hybrid search or something similar. And yet in 2024 there was a flood of chat UIs that were basically embed + cosine similarity, all with 10k+ stars. That seems to have slowed down now.

      It's all because the taste for careermaxxing outweighs the taste for rigor in our industry. It's very sad.
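      (For context, the "embed + cosine similarity" pattern that comment criticizes looks roughly like the sketch below. The vectors here are toy stand-ins for real model embeddings; the document names and values are made up for illustration.)

      ```python
      import numpy as np

      def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
          # Cosine of the angle between two embedding vectors.
          return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

      # Toy "document embeddings" standing in for a real embedding model's output.
      docs = {
          "doc_a": np.array([0.9, 0.1, 0.0]),
          "doc_b": np.array([0.1, 0.8, 0.3]),
      }
      query = np.array([0.85, 0.2, 0.05])

      # Naive retrieval: rank every document purely by cosine similarity to the query.
      # This is the whole "search engine" in many of those chat UIs; hybrid search
      # would additionally mix in keyword scoring (e.g. BM25) before ranking.
      ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
      print(ranked[0])
      ```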