7 comments

  • NitpickLawyer3 hours ago
    Misses a few interesting early models: GPT-J (by Eleuther, using gpt2 arch) was the first-ish model runnable on consumer hardware. I actually had a thing running for a while in prod with real users on this. And GPT-NeoX was their attempt to scale to gpt3 levels. It was 20b and was maybe the first glimpse that local models might someday be usable (although local at the time was questionable, quantisation wasn't as widely used, etc).
    • pu_pe3 hours ago
      GPT-J was the one that made me really interested in LLMs, as I could run it on a 3090.

      Some details on the timeline are not quite precise and would benefit from linking to a source so that everyone can verify them. For example, HyperCLOVA is listed as 204B parameters, but it seems it used 560B parameters (https://aclanthology.org/2021.emnlp-main.274/).
      • ai_bot3 hours ago
        Great idea! Thanks
    • ai_bot3 hours ago
      Great catches — just added GPT-Neo (2.7B, Mar 2021), GPT-J (6B, Jun 2021), and GPT-NeoX (20B, Apr 2022). Thanks!
  • Maro1 hour ago
    This would be interesting if each of them had a high-level picture of the NN, "to scale", perhaps color coding the components somehow. On mouse scroll it would scroll through the models, and you could see the networks become deeper and wider, colors change, almost animated. That'd be cool.
    • ai_bot43 minutes ago
      Thanks! Great idea
  • YetAnotherNick6 minutes ago
    It misses almost every milestone, yet lists Llama 3.1 as one. T5 was a much bigger milestone than almost everything on the list.
  • hmokiguess45 minutes ago
    Would be nice to see some charts and perhaps an average of the cycles with a prediction of the next one based on it
    • ai_bot44 minutes ago
      Thanks! I'll add some charts
  • jvillasante1 hour ago
    Why is it hard, in times when AI itself can do it, to add a light mode to these all-black websites!? There are people who just can't read dark mode!
    • Lerc1 minute ago
      Visual presentation has been a weak point of AI generation for me. There isn't a lot of support for letting them see how a potential presentation might appear to a human.

      Models that take visual input seem more focused on identifying what is in an image than on what a human might perceive in it, and most interfaces lack any form of automated feedback mechanism for the model to look at what it has made.

      In short, I have made some fun things with AI, but I still end up doing CSS by hand.
    • ai_bot43 minutes ago
      Thank you! Sorry for the inconvenience. I'll add it a bit later
  • varispeed46 minutes ago
    Are the models used for apps like Codex designed to mimic human behaviour, as in deliberately creating errors in code that you then have to spend time debugging and fixing? Or is it a natural flaw, and the fact that humans also do it just a coincidence?

    This keeps bothering me: why do they need several iterations to arrive at a correct solution instead of getting it right the first time? Prompts like "repeat solving it until it is correct" don't help.