6 comments

  • fph1 hour ago
    Despite the shortage, RAM is still cheaper than mathematicians.
    • captainbland1 minute ago
      I don't know. I think if you weighed up AI-related datacentre spend against the average mathematics academic's salary, you could come to a different conclusion.
  • abdelhousni1 hour ago
    The same could be said about other IT domains... When you see single webpages that weigh tens of MB, you wonder how we got here.
  • Lerc3 hours ago
    This is one of the basic avenues for advancement.

    Compute, bytes of RAM used, bytes in the model, bytes accessed per iteration, bytes of data used for training.

    You can trade the balance if you can find another way to do things; extreme quantisation is but one direction to try. KANs were aiming for more compute and fewer parameters. The recent optimisation projects have been pushing at these various properties. Sometimes gains in one come at the cost of another, but that needn't always be the case.
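    One of those trades can be sketched numerically. The following is a minimal illustration (not any particular library's scheme) of how extreme quantisation shrinks bytes-in-model while adding compute per access; all names and sizes here are made up:

```python
import numpy as np

# Illustrative sketch of the bytes-in-model vs compute trade: symmetric
# 4-bit quantisation stores ~8x fewer bytes than fp32, but every access
# now pays an extra dequantisation step.

def quantize_int4(w: np.ndarray):
    """Map fp32 weights to 4-bit integer codes with one per-tensor scale."""
    scale = np.abs(w).max() / 7.0          # use the symmetric range [-7, 7]
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale    # extra compute on every access

w = np.random.default_rng(0).standard_normal((1024, 1024)).astype(np.float32)
q, scale = quantize_int4(w)

fp32_bytes = w.nbytes        # 4 bytes per weight
int4_bytes = q.size // 2     # two 4-bit codes pack into one byte in storage
err = np.abs(w - dequantize(q, scale)).mean()
print(fp32_bytes / int4_bytes)  # → 8.0
```

    (The int8 array stands in for packed 4-bit storage; real kernels pack two codes per byte and fuse dequantisation into the matmul.)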
  • tornikeo33 minutes ago
    Sigh. Don't make me tap the sign [1]

    [1] http://www.incompleteideas.net/IncIdeas/BitterLesson.html
  • amelius25 minutes ago
    Can we say something about the compression factor for pure knowledge of these models?
  • LoganDark2 hours ago
    We will not see memory demand decrease, because this will simply allow AI companies to run more instances. They still want effectively unlimited memory at the moment, no matter how much AI improves.
    • jurgenburgen2 hours ago
      If models become more efficient, we will move more of the work to local devices instead of using SaaS models. We're still in the mainframe era of LLMs.
      • throwatdem1231136 minutes ago
        The hyperscalers do not want us running models at the edge and they will spend infinite amounts of circular fake money to ensure hardware remains prohibitively expensive forever.
        • topspin3 minutes ago
          > they will spend infinite amounts of circular fake money ... forever

          If that's the plan (there is no plan), then it expires at some point, because it's a spiral, and such spirals always bottom out.
        • Imustaskforhelp12 minutes ago
          > of circular fake money

          Oh, it gets worse than that. The money that funded all of this for OpenAI was borrowed from Japanese banks at cheap interest rates (by SoftBank, for the Stargate project), and the Japanese banks can do it because of Japanese people and companies, and because the collateral is stock inflated by the value of people investing their hard-earned money into the markets.

          So in a way they are using real hard-earned money to fund all of this; they are using your money to attack you behind your back.

          I once wrote a really long comment about the shaky finances of Stargate, so I'll suggest it here: https://news.ycombinator.com/item?id=47297428
      • Ray2020 minutes ago
        > If models become more efficient

        Then we can make them even bigger.
        • Imustaskforhelp17 minutes ago
          > Then we can make them even bigger.

          But what if small models become "good enough" for most intents and purposes?

          There are some people here and on r/localllama who I have seen run small models, sometimes several at once, to iterate quickly, with a larger model plugged in to fix anything that remains.

          Larger/SOTA models would still see some demand, but I don't think it would be nearly as much as people expect. We all still feel that different models are good for different tasks, and a good recommendation is to benchmark models against your own use cases, since some small models can be good enough within your particular domain to be worth keeping in your toolset.
      • DeathArrow1 hour ago
        I don't think we are there yet. Models running in data centers will still be noticeably better, since efficiency gains will also let providers build and run better models.

        Not many people today would want models comparable to what was SOTA two years ago.

        To run models locally with results as good as the models running in data centers, we need both efficiency gains and a wall in AI improvement.

        Neither condition seems likely in the near future.
      • ssyhape54 minutes ago
        I like the mainframe comparison, but isn't there a key difference? Mainframes died because hardware got cheap -- that's predictable. LLM efficiency improving enough to run locally needs algorithmic breakthroughs, which... aren't. My gut says we'll end up with a split. Stuff where latency matters (copilots, local agents) moves to the edge once models actually fit on a laptop. But training and big context windows stay in the cloud, because that's where the data lives. One thing I keep going back and forth on: is MoE "better math" or just "better engineering"? That distinction feels like it matters a lot for where this all goes.
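        For what it's worth, a toy top-k routing sketch suggests the "engineering" reading: MoE cuts bytes accessed per token without cutting bytes in the model. All names and sizes below are illustrative, not from any real model:

```python
import numpy as np

# Toy mixture-of-experts forward pass: a router scores E expert FFNs and
# only the top k are evaluated per token, so only k/E of the expert
# weights are read each step.

rng = np.random.default_rng(0)
d, E, k = 64, 8, 2                         # hidden size, experts, experts per token
experts = rng.standard_normal((E, d, d)).astype(np.float32)
router = rng.standard_normal((d, E)).astype(np.float32)

def moe_forward(x: np.ndarray) -> np.ndarray:
    logits = x @ router
    top = np.argsort(logits)[-k:]          # indices of the k best-scoring experts
    g = np.exp(logits[top] - logits[top].max())
    g /= g.sum()                           # softmax gate over the chosen experts
    return sum(gi * (x @ experts[i]) for gi, i in zip(g, top))

x = rng.standard_normal(d).astype(np.float32)
y = moe_forward(x)

touched = k * experts[0].nbytes            # only k experts' weights are accessed
print(touched / experts.nbytes)            # → 0.25, i.e. k/E
```

        The total parameter count (and RAM footprint) is unchanged; what shrinks is memory traffic per token, which is exactly the "bytes accessed per iteration" axis from the trade-off comment above.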
    • redrove2 hours ago
      I disagree. I think a sharp drop in memory requirements of at least an order of magnitude will cause demand to adjust accordingly.
      • cyanydeez24 minutes ago
        The Department of Transportation always thinks adding more lanes will reduce traffic.

        It doesn't; it induces demand. Why? Because there are always too many people with cars ready to fill those lanes.
        • nkmnz15 minutes ago
          Citation needed. I've heard this quite often, but so far I haven't seen proof of the stated causality.

          PS: This doesn't mean that better public transportation couldn't deliver more bang for the buck than the n-th additional car lane. But never have I heard from anybody that they chose to buy a car, or to use an existing car more often, because an additional lane was built.
    • jLaForest46 minutes ago
      Jevons paradox: https://en.wikipedia.org/wiki/Jevons_paradox