Thanks for sharing. However, this falls short of being a good writeup due to the lack of numbers and data.<p>To give a specific example, you said:<p>```
so far, so good, I was able to play with PyTorch and run Qwen3.6 on llama.cpp with a large context window
```<p>But there are no numbers, results, or pasted output: no performance figures, no timings.<p>Anyone with enough RAM can run these models; it will just be impractically slow. The whole point of the Strix Halo is decent performance, so sharing numbers would be valuable here.<p>Would you mind sharing them? Thanks!
This is more of a “succeeding in getting anywhere close to messing around” article than an “it works, so now I can run some benchmarks” one.
To give the benefit of the doubt, the author does state multiple times (including in the title) that these are "first impressions". Perhaps they should have added something like "...in the next post, we'll explore performance and numbers" to avoid the cliffhanger, or labeled this as part 1 (assuming the intention was to follow up with a part 2).