> on Apple Silicon, a WebAssembly module's linear memory can be shared directly with the GPU: no copies, no serialization, no intermediate buffers

enhance

> no copies, no serialization, no intermediate buffers

Would it kill people to write their own stuff? Why are we doing this? Of all the things people immediately cede to AI, they cede their human ability to communicate and share ideas. This timeline is bonkers.
I’ve become overly sensitive to it as well, because it’s such a reliable indicator that there are other problems in the work.

I’ve wasted so much time this year looking at interesting repos before discovering that one of the main claims was a hallucination, or that when I got to the specific part of the codebase, it just had a big note from the LLM that it’s a placeholder until it can figure out how to do the requested thing.

The people who have AI write their articles don’t care whether it works or whether it’s correct. They’re trying to get jobs and want something quick and interesting that will appeal to a lazy hiring manager. We’re just taking the bait too.
This sort of obvious pattern is an instant AI dead giveaway that I keep seeing in hundreds of blogs and code posted on this site:

    "Here is X - it makes Y"
    "That's not X, it's Y."
    "...no this, no that, no X, no Y."
Another way of telling via code: deduce the author’s experience and notice that they apparently became an expert in a different language since... yesterday.

There will come a time when this catches up with those who over-rely on AI, and they’ll struggle in on-site interviews with whiteboard tests.