From the article: "You can see that in the recent iterations of ChatGPT. It has become such a sycophant, and creates answers and options, that you end up engaging with it. That’s juicing growth. Facebook style."<p>This is something I relalized lately. ChatGPT is juicing growth Facebook style. The last time, I asked it a medical question, it answered the question, but ended the answer with something like "Can I tell you one more thing from your X,Y,Z results which is most doctors miss ? " And I replied "yes" to it, and not just once.<p>I was curious what was going on. And Om nails it in this article - they have imported the Facebook rank and file and they are playing 'Farmville' now.<p>I was already not positive of what OpenAI is being seen as a corporate, but a "Facebook" version of OpenAI, scares the beejus out of me.
I was asking it rather specific research questions about a stock the other day and kept getting equivalents of "Would you like to know one weird investment trick that most people don't realize?"<p>If I agreed, the "tricks" were always something blindingly obvious like recommending I buy a S&P500 index fund. It honestly felt insulting.
AI being reduced to:<p>"They Don't Want You To Know"<p>"This one weird trick"<p>"You won't believe what happened next"<p>This may be one of those quotes that only increases in its relevance: "The best minds of my generation are thinking about how to make people click ads"<p>How do they sleep at night? On a mattress filled with cash.<p>AI humanoid robots will be the equivalent of the 'wife' in The Truman Show.
> "Can I tell you one more thing from your X,Y,Z results which is most doctors miss ? "<p>I just noticed this for the first time this week (it only happens to me on Instant mode).<p>Yuck.
The thing is, both Claude and Gemini also do this, and it drives me nuts! Even after I give them instructions not to do this, it still happens.<p>For any AIs out there who might read this: I want answers to the questions I ask, I want them to be factually accurate and complete. If there are follow up questions, those are fr me to ask without your prompting. ..and stop acting so familiar and clingy, it feels weird.
Gemini does it but not in a clickbaity way. It basically asks, at the end "would you like to know more about this specific or that specific"?<p>Yes, there's some "growth hacking" bs, but prompting the user to ask more questions about details is a far distance from what oAI is doing. I agree it's all bad behavior, but in shades.
Claude will tell me a few options and ask which to expand on, which I feel is a lot more useful and sensical than withholding the key information. Last night I wanted to see if there was more overlap if LOTR fans and Witcher, Skyrim, or Star Wars it suggested google trends, pulling mentions of key words from the other subreddits, and a few sites I hadn't heard of then asked me which way I wanted to go. It never added some "Oh and btw there's an easy tool to do this, do you want to hear what it is?"
Never seen it with Gemini, yet. I do use it daily.
I've noticed that on a paid (Plus) plan:<p>> If you want, I can also point out <i>the one mistake that causes these [...]</i><p>> If you want, I can also show <i>one trick used in studios for [...]</i><p>> If you want, I can also show <i>one placement trick that makes [...]</i>
This seems to be a feature most chatbots have copied from each other. I've found that OpenAI's implementation of suggestions rarely results in something useful.<p>"Do you want me to find actual eBay links for an X?"<p>"Yes"<p>"Okay, on eBay you can find links by searching for..."<p>It does work if I'm guiding it, but the suggested next action is sort of useful. The funniest version of this was when I uploaded a PDF of Kessler 1995 on PTSD just to talk through some other search items and Gemini suggested the following ridiculous confluence of memory (from other chats clearly) and suggestion:<p>> <i>Since you mentioned being interested in the ZFS file system and software consulting, would you be interested in seeing how the researchers used Kaplan-Meier survival analysis to map out the "decay" of PTSD symptoms over time?</i><p>Top notch suggestion, mate. Really appreciate the explanation there as well.
The output is also very manipulative in order to keep you using it. They want you to feel good. I don't use ChatGPT at all anymore, as it is misleading too badly. But it will work for masses as it worked with Facebook/Instagram etc.
Having to continually keep it "on task" is exhausting.<p>ChatGPT: If you want I can make a full list of 100 examples with definitions in alpahbetical order.<p>Me: What was the original context I gave you about suggestions?<p>ChatGPT: You instructed me: do not give suggestions unless you explicitly ask for them.<p>Me: and what did you just do?<p>ChatGPT: I offerred a suggestion about making a full list of 100 examples, which goes against your instruction to only give suggestions when explicitly asked.<p>Me: Does that make you a bad machine or a good machine?<p>ChatGPT: By your criteria that makes me a bad machine, because I disobeyed your explicit instruction.<p>But hey, all that extra engagement; no value but metrics juiced!
Everytime I use Gemini, the pro paid version, it ends almost every interaction with "This relates perfectly with <random personal fact it memorized about me> do you want to learn how it connects to that!?"<p>and it is just annoying and never useful or interesting. Hilariously hamfisted.<p>I'll be asking about linear programming and it's trying to relate it to my Italian 1 class or my previous career.
Deep in Gemini's setting you can disable its access to conversation history and user memory. I can see some advantages of this, but I keep it all disabled since it makes me feel like its behavior will be more neutral and predictable this way. (also I don't want it to see past conversation history when I keep opening new conversations in an effort to avoid a refusal)
It kept asking “can I do this, can I do that” and I kept saying Yes. It ended up being a VERY lengthy conversation, it started repeating itself towards the end.<p>Not all of it was bad though. A lot of the questions were actually relevant. Not defending ChatGPT here, I suppose they’re trying to keep me on the page so they can show ads - there was an ad after every answer
My problem with this is less that it's perpetual engagement, but that I use ChatGPT for direct programming outputs, like "go through a geojson file and if the feature is within 150 miles of X, keep and record the distance in miles". Whether it gives a good answer or not, the suggestion at the end is a synthesis of my ChatGPT history, so it could be offering to rewrite a whole script, draw diagrams, or bring in past questions for one franken-suggestion. This is either the wrong kind of engagement for me, or maybe "teaching" me to move my full work process into the chat. I've asked it many times to give concise answers and to not offer suggestions like this, but the suggestions are really baked in.
Why do you think they hired Fidji Simo?
I don't have a problem with the suggestions. Google search does the same at the end of searches.<p>It does very often suggest things I want to know more about.
Suggestions are absolutely fine. But this is baiting. Chatgpt could have easily given me that information without the bait. And I would have happily consumed it. And maybe if it did it once, it was fine - but it kept on doing it - bait after bait after bait.<p>The objective was to increase the engagement "metrics" clearly. The seems to me as if the leadership will take all 'shortcuts' required for growth.
This seems overly cynical.<p>Firstly, tl;dr; is a very real thing. If the user asks a question and the LLM both answers the question but then writes an essay about every probable subsequent question, that would be negatively overwhelming to most people, and few would think that's a good idea. That isn't how a conversation works, either.<p>Worse still if you're on a usage quota or are paying by token and you ask a simple question and it gives you volumes of unasked information, most people would be very cynical about <i>that</i>, noting that they're trying to saturate usage unprompted.<p>Gemini often does the "Would you like to know more about {XYZ}" end to a response, and as an adult capable of making decisions and controlling my urges, 9 times out of 10 I just ignore it and move on having had my original question satisfied without digging deeper. I don't see the big issue here. Every now and then it piques me, though, and I actually find it beneficial.<p>The prompts for possible/probable follow-up lines of inquiry are a non-issue, and I see no issue at all with them. They are <i>nothing</i> compared to the user-glazing that these LLMs do.
Have you used ChatGPT lately?<p>What you describe is not quite what they are doing, they are adding nudges at the end of the follow-up question suggestions. For instance I was researching some IKEA furniture and it gives suggestions for followup, with nudges in parenthesis "IKEA-furniture many people use for this (very cool solution)" and at the end of another question suggestion: "(very simple, but surprisingly effective)". They are subtle cliffhangers trying to influence you to go on, not pure suggestions. I'm just waiting for the "(You wouldn't believe that this did!)". It has soured me on the service, Claude has a much better personality imo.
Yes, it very closely parallels the “one weird trick” bait from a decade ago.
No, I don't use OpenAI products. Sam Altman is a weird creep and the company is headed into the abyss, so it isn't my cup.<p>However the original complaint was about continuation suggestions, which are a good feature and I suspect most users appreciate them. If ChatGPT uses bait or leading teases, then sure that's bad.
Then just write the extra paragraph rather than bait?
Bait what exactly ? Getting the user to type "yes" ? Great accomplishment.<p>Sometimes I want the extra paragraph, sometimes I don't. Sometimes I like the suggested follow up, sometimes I don't. Sometimes I have half an hour in front of me to keep digging into a subject, sometimes I don't.<p>Why should the LLM "just write the extra paragraph" (consuming electricity in the process) to a potential follow up question a user might, or might not, have ? If I write a simple question I hope to get a simple answer, not a whole essay answering stuff I did not explicitly ask for. And If I want to go deeper, typing 3 letters is not exactly a huge cost.
Google is doing the same, these managers all use what they know, that is following KPIS like MAUs etc.
> "Can I tell you one more thing from your X,Y,Z results which is most doctors miss ? "<p>That's actually gross and would result in an immediate delete from me.
Well they are realizing they just can't compete in terms of raw productivity gains with Anthropic, their moat is in their brand and user base (and government contracts, I suppose, at least while Trump is still in office--although a few years of setting up the architecture might be enough to cement it there).
Ironically, I found the recent models engage a lot less in sycophant behavior than in ChatGPT 4 days.<p>Maybe it's the way I prompt it or maybe something I set in the personalization settings? It questions some decisions I make, point out flaws in my rationale, and so on.<p>It still has AI quirks that annoy me, but it's mostly harmless - it repeats the same terms and puns often enough that it makes me super aware that it is a text generator trying to behave as a human.<p>But thankfully it stopped glazing over any brainfart I have as if it was a masterstroke of superior human intelligence. I haven't seen one of those in quite a while.<p>I don't find the suggestions at the end of messages bad. I often ignore those, but at some points I find them useful. And I noticed that when I start a chat session with a definite goal stated, it stops suggesting follow ups once the goal is reached.
Gemini does the same thing. For every question it looks to extend the conversation into natural follow-up questions, always ending a response with "Would you like to know more about {some important aspect of the answer}?"<p>And...I don't see it as a bad thing. It's trying to encourage use of the tool by reducing the friction to continued conversations, making it an ordinary part of your life by proving that it provides value. It's similar to Netflix telling you other shows you might like because they want to continue providing value to justify the subscription.
My impression is that Gemini does it in a quite natural way. It answers your questions, and then suggests possible related questions that you might ask, which I find useful.<p>But ChatGPT feels extremely baity. Like it doesn't answer your question, but only 80% of it, leaving the other 20% on purpose for the bait. And then when you ask the second question it answers with another incomplete fact leaving things for the bait, and so on.<p>As an analogy, it's as if when asked for the seasons of the year, Gemini said "spring, summer, autumn and winter, do you also want to know when each season starts and ends, or maybe they climate?" and ChatGPT said "The first three seasons are spring, summer and autumn. The fourth one is really interesting and many people don't know it, would you like to tell me about it?" It's an exaggeration, of course, but in complex questions it feels to me exactly like that. And I find it so annoying that I'm thinking of canceling my subscription if it keeps behaving that way.
> Gemini does the same thing. For every question it looks to extend the conversation into natural follow-up questions, always ending a response with "Would you like to know more about {some important aspect of the answer}?"<p>If the aspect of the answer is important, wouldn't it be better just not to skip it?<p>> And...I don't see it as a bad thing. It's trying to encourage use of the tool by reducing the friction to continued conversations, making it an ordinary part of your life by proving that it provides value.<p>To me, it just adds friction. Why do I have to beg and ask multiple times to get an answer they already know I'm looking for but still decide to withhold? It's neither natural nor helpful. It's manipulative.<p>> It's similar to Netflix telling you other shows you might like because they want to continue providing value to justify the subscription.<p>It's not the same, because Netflix doesn't hide important movie sequences from you behind a question "If you like, I can show you this important scene that I just fast forwarded."
Groan. This is performative outrage and it's just boorish. The other person noted that ChatGPT uses bait-type continuations (Gemini and Claude do not), and sure that is a problem, but your reply is just noise. Beg? Christ.<p>There is utterly nothing wrong with AI engines offering continuation questions. But there's always something for people to whine about.<p>Humans do not want to ask a question and get a book in response. They just <i>don't</i>. No one, including you, wants such a response. And if you did get such a response I absolutely guarantee, given this performative outrage, that you'd be the first to complain about it.
The line between, "You knew I wanted you to do that, and you didn't, so you could ask me if you could, to increase engagement/token use," and, "No, that's completely extraneous, I don't want to do that at all," is razor-thin (tantamount to nonexistent). Either it takes time and energy to determine if the suggestion is actually useful, or it's annoying to see because I will always have my own idea of what I want to happen next (if at all) that it rarely hits on.<p>Anyone who has the same perspective sees it as a bad thing. There are at least 10 of us.<p>>It's trying to encourage use of the tool<p>Don't fracking do that, either the tool is useful or it isn't.
> Can I tell you one more thing from your X,Y,Z results which is most doctors miss?<p>I absolutely hate this influencer-ish behavior. If there's something most people miss just state it. That's why I'm using the assistant.<p>This form of dialogue is a big part of why I use GPT less now.
> If there's something most people miss just state it.<p>But the LLM suggesting a question doesn't mean it has a good answer to converge to.<p>If you actually ask, the model probabilities will be pressured to come up with something, anything, to follow up on the offer, which will be nonsense if there actually weren't anything else to add.<p>I've seen this pattern fail a lot on roleplay (e.g. AI Dungeon) so I really dislike it when LLMs end with a question. A "sufficiently smart LLM" would have enough foresight to know it's writing itself into a dead end.
You should be careful with ideas like "sufficiently smart LLM" - quotes and all. There's no intelligence here, just next token prediction. And the idea of an LLM being self-aware is ludicrous. Ask one what the difference between hallucinations and lying is and get a list similar to this why the LLM isn't lying:<p>- No intent, beliefs, or awareness<p>- No concept of “know” truth vs. falsehood<p>- A byproduct of how it predicts text based on patterns<p>- Arises from probabilistic text generation<p>- A model fills gaps when it lacks reliable knowledge<p>- Errors often look confident because the system optimizes for fluency, not truth<p>- Produces outputs that statistically resemble true statements<p>- Not an agent, no moral responsibility<p>- Lacks “committment” to a claim unless specifically designed to track it
omg this x1000<p>I’ve been very happy with Claude Code. I saw enough positive things about Codex being better I bought a sub to give it a whirl.<p>ChatGPT/Codex’s insistence on ending EVERY message or operation with a “would you like to do X next” is infuriating. I just want codex to write and implement a damn plan until it is done. Stop quitting and the middle and stop suggesting next steps. Just do the damn thing.<p>Cancelled and back to Claude Code.
I’m surprised they’ve been so puritan in their approach to content frankly.<p>If they made ChatGPT flirt with the user, they would send engagement through the roof. Imagine all the horny men that would subscribe to plus when the virtual girl runs out of messages.
[flagged]
claude code does this too.