4 comments

  • snovv_crash 4 hours ago
    Curious how this would deal with things like Kahan summation, which corrects floating-point errors that theoretically wouldn't exist if you had infinite-precision representations.
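For reference, Kahan (compensated) summation carries a running correction term that re-adds the low-order bits lost in each floating-point addition. A minimal sketch, purely illustrative and not tied to the compiler under discussion:

```python
def kahan_sum(xs):
    """Compensated summation: track the rounding error of each addition
    in a correction term and fold it back into subsequent terms."""
    total = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for x in xs:
        y = x - c            # apply the correction to the incoming term
        t = total + y        # big + small: low-order bits of y may be lost
        c = (t - total) - y  # recover exactly what was lost (algebraically zero)
        total = t
    return total
```

A naive `sum([1e16] + [1.0] * 1000)` returns 1e16, because each 1.0 falls below the ulp of the running total; the compensated version recovers the full 1e16 + 1000.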
  • strujillo 48 minutes ago
    Sparse workloads are a really good fit for scientific discovery pipelines, especially when you're searching over candidate equation spaces.

    In practice, even relatively small systems can surface meaningful structure. I've been using sparse regression (SINDy-style) on raw solar wind data and was able to recover things like the Sun's rotation period (~25.1 days estimate) and non-trivial scaling laws.

    What becomes limiting pretty quickly is compute efficiency when you scale candidate spaces, so compiler-level optimizations like this feel directly relevant to making these approaches practical at larger scales.
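The SINDy-style approach mentioned above boils down to sparse regression over a library of candidate terms. A toy sketch using sequentially thresholded least squares (STLSQ), with made-up data rather than the commenter's solar wind pipeline:

```python
import numpy as np

def stlsq(theta, dxdt, threshold=0.1, iters=10):
    """Sequentially thresholded least squares: repeatedly zero out
    small coefficients and refit on the surviving candidates."""
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        if (~small).any():
            xi[~small] = np.linalg.lstsq(theta[:, ~small], dxdt, rcond=None)[0]
    return xi

# Toy system: dx/dt = -2x, sampled noiselessly.
x = np.linspace(0.1, 2.0, 50)
dxdt = -2.0 * x
theta = np.column_stack([np.ones_like(x), x, x**2])  # candidate library [1, x, x^2]
xi = stlsq(theta, dxdt)
# Only the x term should survive thresholding: xi ≈ [0, -2, 0]
```

Scaling the candidate library is exactly where the compute cost blows up: the regression is cheap, but evaluating a large `theta` over long time series is where compiler-level sparse optimizations would pay off.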
  • owlbite 5 hours ago
    It will be interesting to see if this solves any issues that aren't already addressed by the likes of matlab / SciPy / Julia. Reading the paper it sounds a lot like "SciPy but with MLIR"?
    • geremiiah 5 hours ago
      It's more like OpenXLA or the PyTorch compiler: it codegens Kokkos C++ kernels from MLIR-defined input programs, which can, for example, be exported from PyTorch. Kokkos is common in scientific computing workloads, so outputting readable kernels is a feature in itself. Beyond that, there's a lot of engineering that can go into such a compiler to specifically optimize sparse workloads.

      What I am missing is a comparison with JAX/OpenXLA and PyTorch with torch.compile().

      Also, instead of rebuilding a whole compiler framework they could have contributed to Torch Inductor or OpenXLA, unless they had some design decisions that were incompatible. But it's quite common for academic projects to reinvent the wheel. It's also not necessarily a bad thing: it's a pedagogical exercise.
      • convolvatron 1 hour ago
        I think exactly the opposite. If someone was able to build a framework that doesn't overly constrain the problem, and doesn't require weeks of screwing around with the build, integration of half-baked components, and insane amounts of boilerplate, that would be a fantastic contribution in and of itself, even if it didn't advance the state of tensor compilation in any other way.
  • trevyn 4 hours ago
    Isn't this where Mojo is going?
    • uoaei 3 hours ago
      Speaking of, where is Mojo?