5 comments

  • brokensegue 1 minute ago
    "Classical ML" models typically have a narrower range of applicability. In my mind, the value of Ollama is that you can easily download and swap out different models behind the same API; many of the models are roughly interchangeable, with tradeoffs you can compute.

    If you're working on a fraud problem, an open-source fraud model will probably be useless (if it could even exist). And if you own the entire training-to-inference pipeline, I'm not sure what this offers. I guess you can easily swap the backends? Maybe for ensembling?
  • tl2do 10 minutes ago
    Since generative AI exploded, it's all anyone talks about. But traditional ML still covers a vast space in real-world production systems. I don't need this tool right now, but I'm glad to see work in this area.
  • mehdibl 23 minutes ago
    Ollama is quite a bad example here. Despite being popular, it's a simple wrapper, increasingly overshadowed by llama.cpp, the app it wraps.

    I don't understand the parallel here.
  • Dansvidania 29 minutes ago
    Can't check it out yet, but the concept alone sounds great. Thank you for sharing.
  • jnstrdm05 1 hour ago
    I have been waiting for this! Nice.