1 comment

  • AutoJanitor · 1 hour ago
    I'm an independent researcher running LLMs on an IBM POWER8 server (320GB RAM, vintage 2015 hardware).

    On December 16, 2025, I developed "RAM Coffers", a NUMA-distributed conditional memory system that selectively houses model weights in RAM banks with resonance-based routing for O(1) knowledge retrieval. On January 12, 2026, DeepSeek published their "Engram" paper (arXiv:2601.07372) describing the same core concept: separating static knowledge from dynamic computation via O(1) lookup. Same idea. 27 days apart. No connection.

    DOI: 10.6084/m9.figshare.31093429
    GitHub: https://github.com/Scottcjn/ram-coffers
    DeepSeek paper: https://arxiv.org/abs/2601.07372

    Running on "obsolete" POWER8 hardware, I hit 147 tokens/sec on TinyLlama, 8.8x faster than stock llama.cpp. Sometimes the garage beats the lab.
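    The "static knowledge behind an O(1) lookup" idea described above can be sketched in a few lines: hash a key to pick a fixed RAM bank (one per NUMA node), then do a constant-time dictionary lookup inside that bank. This is a minimal illustrative sketch only; the names (`WeightBank`, `Router`, `route`) and the plain-dict storage are my assumptions, not code from the ram-coffers repo or the Engram paper.

    ```python
    import zlib

    class WeightBank:
        """One NUMA-node-resident store of weight shards, addressed by key."""
        def __init__(self, node_id: int):
            self.node_id = node_id
            self.shards = {}  # Python dict: O(1) average-case lookup

        def put(self, key: str, shard) -> None:
            self.shards[key] = shard

        def get(self, key: str):
            return self.shards[key]

    class Router:
        """Deterministically maps a key to one of N banks (one per NUMA node)."""
        def __init__(self, num_nodes: int):
            self.banks = [WeightBank(i) for i in range(num_nodes)]

        def route(self, key: str) -> WeightBank:
            # crc32 is stable across runs; modulo picks a bank in O(1)
            return self.banks[zlib.crc32(key.encode()) % len(self.banks)]

        def store(self, key: str, shard) -> None:
            self.route(key).put(key, shard)

        def fetch(self, key: str):
            return self.route(key).get(key)

    router = Router(num_nodes=2)              # e.g. a two-socket POWER8 box
    router.store("layer0.ffn.w1", [0.0] * 8)  # stand-in for a real weight shard
    shard = router.fetch("layer0.ffn.w1")     # constant-time retrieval
    ```

    A real implementation would additionally pin each bank's allocations to its NUMA node (e.g. via libnuma) so the O(1) lookup also lands in local memory; the sketch only shows the routing logic.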