7 comments
This is great to see.

I trained some research models using the existing PyTorch/XLA on TPUs, and it was a mess of undocumented behavior and bugs (silently hanging after 8 hours of training!).

If anyone is trying to use PyTorch on TPU before TorchTPU is released, you can check out the training pipeline that I ended up building to support my research: https://github.com/aklein4/easy-torch-tpu
Sounds good, but my main question is: is this a fork, or a new backend they're building in-tree (like MPS)?
Now all that’s missing is an actual chip that can be purchased. Any ideas?
Very excited for this.