4 comments

  • puppion 39 minutes ago
    Really nice introduction. Two things stood out to me that I think set this apart from the dozens of "intro to PyTorch" posts out there:

    1. The histogram visualization of the different tensor initialization functions is a great idea. I've seen so many beginners confused about rand vs randn vs empty, and seeing the distributions side by side makes the differences immediately obvious. More tutorials should lead with "the best way to understand is to see it."

    2. I appreciate that the article is honest about its own results. A lot of intro tutorials quietly pick a dataset where their simple model gets impressive numbers. Here the model gets 18.6% MAPE and only 37% of predictions within 10% — and instead of hand-waving, the author correctly diagnoses the issue: the features don't capture location granularity, and no amount of architecture tuning will fix missing information. That's arguably the most important ML lesson in the whole piece, and it's buried at the end almost as an afterthought. "Great models can't compensate for missing information" is something I wish more practitioners internalized early.

    The suggestion to reach for XGBoost/LightGBM for tabular data is also good advice that too many deep learning tutorials omit. Would love to see a follow-up comparing the two approaches on this same dataset.
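    For anyone who wants the rand/randn/empty distinction without plotting histograms, here is a minimal sketch (assuming only that PyTorch is installed; the sizes and seed are arbitrary):

    ```python
    import torch

    torch.manual_seed(0)

    # torch.rand: samples uniformly from [0, 1)
    u = torch.rand(10_000)

    # torch.randn: samples from a standard normal (mean 0, std 1, unbounded)
    n = torch.randn(10_000)

    # torch.empty: allocates memory WITHOUT initializing it --
    # the values are whatever was in that memory, not random draws
    e = torch.empty(10_000)

    print(u.min().item() >= 0.0 and u.max().item() < 1.0)  # uniform is bounded
    print(abs(n.mean().item()) < 0.05)                      # normal is centered near 0
    print(n.min().item() < 0.0)                             # and takes negative values
    ```

    The side-by-side histograms in the article are the same idea visually: rand is a flat bar over [0, 1), randn is a bell curve, and empty can look like anything at all.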
  • tl2do 25 minutes ago
    The PyTorch3D section was genuinely useful for me. I've been doing 2D ML work for a while but hadn't explored 3D deep learning — didn't even know PyTorch3D existed until this tutorial.

    What worked well was the progressive complexity. Starting with basic mesh rendering before jumping into differentiable rendering made the concepts click. The voxel-to-mesh conversion examples were particularly clear.

    If anything, I'd love to see a follow-up covering point cloud handling, since that seems to be a major use case based on the docs I'm now digging through.

    Thanks for writing this — triggered a weekend deep-dive I probably wouldn't have started otherwise.
  • simonw 52 minutes ago
    Two more recent articles by this author:

    https://0byte.io/articles/neuron.html

    https://0byte.io/articles/helloml.html

    He also publishes to YouTube, where his clear explanations and high production values deserve more views.

    https://www.youtube.com/watch?v=dES5Cen0q-Y (part 2: https://www.youtube.com/watch?v=-HhE-8JChHA) is the video to accompany https://0byte.io/articles/helloml.html
  • SilentM68 10 minutes ago
    Cool tutorial :) Any PDF versions?