Lost me at the first OpenClaw mention.
I am not necessarily against AI, but in this case, I also lost interest at that point. I love reading about reverse engineering, and to me the first part of the article felt like it was leading up to that. But then it ended with what to me feels like "and then I asked AI to finish the project for me, which it did". That's not a criticism by the way, there's nothing wrong with the author using AI to reach a certain goal. I just don't find that interesting personally.
I did finish the article, but for me, it was missing a discussion and review, probably manual improvement, of the code that came out of the LLM. Reverse-engineering means understanding a system that you didn't previously understand, which is still (with some degradation) possible while using an LLM.
I was not expecting to see OpenClaw here either. It's out of keeping with the rest of the article...<p>At least there's acknowledgement of limitations and it's not just hype. Overall a useful data point in terms of what's possible.
Surprised me too. In the end, I guess it's a time-saving tool for a tedious task. But reduces the old-school grittiness of the adventure. Still an enjoyable read.
Why? It seems foolish to have a knee-jerk reaction to someone using a tool that got them where they needed to be.
That’s a good question, and I can’t speak for the parent, but for me, I like reading about a person’s journey of discovery. There were many insights this person did not have because he turned the task over to a power tool. People can use whatever tools they want. I also can spend my attention however I like. Reading about someone using AI is just boring to me.
I suppose it's a bit like winning a first-person shooter game with aim assist on<p>It is not an authentic display of pure skill.