> If OP is not a native English speaker and is using an LLM to create reasonable prose, then it might be the best way for them to try and communicate their ideas. It's probably better than Google Translate. It affects how the reader interprets the writing, though.<p>That's just crazy. Do you think people don't get discriminated against because of that? They'd probably get flagged and blacklisted from HN just for sharing a post riddled with grammar mistakes; it would look like spam to many. If they got lucky, the top comments would be corrections of their grammar, not discussion of the content.<p>If you didn't talk to me before today, you don't know how I talk. You don't know what sincere is like. The term you're looking for is "authentic", not "sincere"; questioning the sincerity of the OP is just wrong. You don't like people having control over how what they have to say is conveyed to others, because you have some irrational bias against the use of a particular tool.<p>You argue, and even use AI yourself (you don't mind being insincere? I'd like your own original arguments, how about that?), to dismiss content because of its style, thereby justifying the need for people to be careful about the style of the posts they share. Have you considered that if they hadn't used AI, you or others would be dismissing their post for other style-related reasons? Because you care about style that much.<p>But you're right, style is content; it was wrong of me to claim otherwise. What I meant was probably "meaning".
The writing style affects how you read the content; in this case you don't like how it forces you to read it, but the meaning OP is trying to communicate (what I meant by "content") is being glossed over.<p>The takeaway for me from this discussion is that people need to use better prompts and better models, not that they shouldn't use an LLM, because even when their grammar and spelling are wrong, they get nitpicked this way.<p>> The effort to sound compelling ends up undercutting any sense that a real person with a real perspective is behind it.<p>That's a fault and a bias in the reader, in my opinion. I didn't even think it was LLM-written; I wasn't looking for it (we tend to find what we're looking for?). My focus was on what was done, validating the claims made, and analyzing the implications. I didn't care how they sounded, because I was able to actually read the content and understand what they were saying. If it were the other way around and I were the OP, I would want people to focus on what I was saying, and appreciate that I took some action to ensure my post was readable.<p>I think they can use better prompts to make it sound and feel better, but it's a real shame that they have to. It is this sort of interaction that makes me wish we had more LLMs making decisions out there instead of humans. Things like accents, writing styles, even last names and spelling mistakes decide the fate of many today. The real value people bring, the real human potential, is dismissed (not in this case, just a general observation); cosmetic and performative factors override all else.<p>> it's not correcting your writing — it's replacing it. That's a fundamentally different thing.<p>It is my writing, in that I agreed the meaning of the rewritten content is what I intended to communicate. People get to have agency over how their meaning is conveyed. You don't have any say over that.
Your criticism of how it feels, although I disagree, is legitimate; but your criticism based solely on the fact that AI rewrote the content is entirely invalid.<p>Let's imagine OP had a human copywriter editing and rewriting the entire post. Would that change anything? If not, why are we talking about LLMs instead of the specifics of what uniquely bothered you, so that people reading this thread can use better prompts to avoid those annoying pitfalls?<p>I didn't even pick up on this being AI-rewritten; I'm only taking yours and others' word for it. My biggest concern these days is that kids are growing up interacting with LLMs a lot, and their original work will be dismissed by older people because it sounds like an LLM. There are many cases of students having their work and exams dismissed, even facing disciplinary actions leading to lawsuits, where teachers and academics wrongly claimed it was LLM-generated content (which is why I keep feeling that perhaps LLMs should replace those biased academics and teachers, if possible).<p>LLM usage isn't going away. Prompts and models may improve, but more likely than not it is more economical and practical for humans to be forced to adapt, one way or another, to regular LLM usage by other humans. If you skip through history in 50-year increments and read the books or news stories of each era, you'll see how different the writing style and "feel" are. There is a very distinctive "feel" to how people on HN write compared to Reddit, gaming Discord servers, Twitter, Bluesky, or the comments section of some conservative site. Some groups use terms like "bro" and "bruh" a lot, others end everything with "lol", and others still include emoji in everything. All of this would feel very weird and inappropriate to someone from the 1800s. I am not saying all that to dismiss your observations, but to say that this stuff isn't all that important.
If you hadn't thought the cause of the annoying writing style was an LLM, I doubt you would have commented on it at all, so my suggestion is not to comment on it. There was no writing-style offense egregious enough that we need to talk about it instead of the actual work OP is sharing.