38 comments

  • 4corners4sides 15 minutes ago
    This is one of those "don't be evil"-style articles that companies remove when the going gets tough, but I guess we should be thankful that things are looking rosy enough for Anthropic at the moment that they would release a blog like this.

    The point about filtering signal vs. noise in search engines can't really be stated enough. At this point, using a search engine and the conventional internet in general is an exercise in frustration. It's simply a user-hostile place: infinite cookie banners for sites that shouldn't collect data at all, autoplay advertisements, engagement farming, sites generated by AI to shill and produce a word count. You could argue that AI exacerbates this situation, but you also have to agree that it is much more pleasant to ask Perplexity, ChatGPT, or Claude a question than to put yourself through the torture of conventional search. Introducing ads into this would completely deprive the user of a way of navigating the web that actually respects their dignity.

    I also agree in the sense that the current crop of AIs do feel like a space to think, as opposed to a place where I am being manipulated, controlled, or treated like some sheep in a flock to be sheared for cash.
  • waldopat 2 hours ago
    I feel like they are picking a lane. ChatGPT is great for chatbots and the like, but, as was discussed in a prior thread, chatbots aren't the be-all and end-all of AI or LLMs. Claude Code is the workhorse for me and most folks I know for AI-assisted development and business-automation tasks. Meanwhile, most folks I know who use ChatGPT are really replacing Google Search. This is where folks are trying to create llm.txt files to become more discoverable by ChatGPT specifically.

    You can see the very different response by OpenAI: https://openai.com/index/our-approach-to-advertising-and-expanding-access/. ChatGPT is saying they will mark ads as ads and keep answers "independent," but that is not measurable. So we'll see.

    For Anthropic to be proactive in saying they will not pursue ad-based revenue is, I think, not just "one of the good guys" posturing; it suggests they may be stabilizing on a business model of both seat-based and usage-based subscriptions.

    Either way, both companies are hemorrhaging money.
    • guidoism 1 hour ago
      > ChatGPT is saying they will mark ads as ads and keep answers "independent," but that is not measurable. So we'll see.

      Yeah, I remember when Google used to be like this. Then today I tried to go to 39dollarglasses.com and accidentally went to the top search result, which was actually an ad for some *other* company. Arrrg.
      • panarky 8 minutes ago
        Before Google, web search was a toxic stew of conflicts of interest. It was impossible to tell if search results were paid ads or the best possible results for your query.

        Google changed all that, and put a clear wall between organic results and ads. They consciously structured the company like a newspaper, to prevent the information side from being polluted and distorted by the money-making side.

        Here's a snip from their IPO letter [0]:

        "Google users trust our systems to help them with important decisions: medical, financial and many others. Our search results are the best we know how to produce. They are unbiased and objective, and we do not accept payment for them or for inclusion or more frequent updating. We also display advertising, which we work hard to make relevant, and we label it clearly. This is similar to a well-run newspaper, where the advertisements are clear and the articles are not influenced by the advertisers' payments. We believe it is important for everyone to have access to the best information and research, not only to the information people pay for you to see."

        Anthropic's statement reads the same way, and it's refreshing to see long-term values like trust expressed like this.

        [0] https://abc.xyz/investor/founders-letters/ipo-letter/default.aspx
    • Gud 29 minutes ago
      Disagree.

      I end up using ChatGPT for general coding tasks because of the limited session/weekly limits Claude Pro offers, and it works surprisingly well.

      The best approach, IMO, is to use them both. They complement each other.
    • johnsimer 1 hour ago
      Both companies are making bank on inference
      • waldopat 1 hour ago
        You may not like these sources, but everyone from the tomato throwers to the green-visor crowd agrees they are losing money. How and when they make up the difference is up for speculation.

        https://www.wheresyoured.at/why-everybody-is-losing-money-on-ai/
        https://www.economist.com/business/2025/12/29/openai-faces-a-make-or-break-year-in-2026
        https://finance.yahoo.com/news/openais-own-forecast-predicts-14-150445813.html
      • exitb 1 hour ago
        Maybe on the API, but I highly doubt that the coding-agent subscription plans are profitable at the moment.
        • tvink 1 hour ago
          For sure not
      • ehsanu1 1 hour ago
        Could you substantiate that? Does that take into account training and staffing costs?
        • ihsw 1 hour ago
          The parent specifically said inference, which does not include training and staffing costs.
      • lysace 1 hour ago
        That is the big question. Got reliable data on that?

        (My gut feeling tells me Claude Code is currently underpriced with regard to inference costs. But that's just a gut feeling...)
        • tvink 57 minutes ago
          https://www.wheresyoured.at/costs/

          Their AWS spend being higher than their revenue might hint at the same.

          Nobody has reliable data; I think it's fair to assume that even Anthropic is doing voodoo math to sleep at night.
        • simianwords 1 hour ago
          > If we subtract the cost of compute from revenue to calculate the gross margin (on an accounting basis), it seems to be about 50% — lower than the norm for software companies (where 60-80% is typical) but still higher than many industries.

          https://epoch.ai/gradient-updates/can-ai-companies-become-profitable
          • lysace 1 hour ago
            The context of that quote is OpenAI as a whole.
  • JohnnyMarcone 5 hours ago
    I really hope Anthropic turns out to be one of the 'good guys', or at least a net positive.

    It appears they trend in the right direction:

    - Have not kissed the ring.
    - Oppose blocking AI regulation that others support (e.g., they do not support banning state AI laws [2]).
    - Committing to no ads.
    - Willing to risk a defense department contract over objections to use for lethal operations [1].

    The things that are concerning:

    - Palantir partnership (I'm unclear about what this actually is) [3]
    - Have shifted stances as competition increased (e.g., seeking authoritarian investors [4])

    It's inevitable that they will have to compromise on values as competition increases, and I struggle to parse the difference between marketing and actually caring about values. If an organization cares about values, it's suboptimal not to highlight that at every point via marketing. The commitment to no ads is obviously good PR, but if it comes from a place of values, it's a win-win.

    I'm curious, how do others here think about Anthropic?

    [1] https://archive.is/Pm2QS
    [2] https://www.nytimes.com/2025/06/05/opinion/anthropic-ceo-regulate-transparency.html?unlocked_article_code=1.JlA.6SV6.hqcvsT7z64p9&smid=url-share
    [3] https://investors.palantir.com/news-details/2024/Anthropic-and-Palantir-Partner-to-Bring-Claude-AI-Models-to-AWS-for-U.S.-Government-Intelligence-and-Defense-Operations/
    [4] https://archive.is/4NGBE
    • mrdependable 3 hours ago
      Being the 'good guy' is just marketing. It's like a unique selling point for them. Even their name alludes to it. They will only keep it up as long as it benefits them. Just look at the comments from their CEO about taking Saudi money.

      Not that I've got some sort of hate for Anthropic. Claude has been my tool of choice for a while, but I trust them about as much as I trust OpenAI.
      • JohnnyMarcone 2 hours ago
        How do you parse the difference between marketing and having values? I have difficulty with that, and I would love to understand how people can be confident one way or the other. In many instances, the marketing becomes so disconnected from actions that it's obvious. That hasn't happened with Anthropic for me.
        • mrdependable 1 hour ago
          I am a fairly cynical person. Anthropic could have made this statement at any time, but they chose to do it when OpenAI says they are going to start showing ads, so view it in that context. They are saying this to try to get people angry about ads to drop OpenAI and move to Anthropic. For them, not having ads supports their current objective.

          When you accept the amount of investment that these companies have, you don't get to guide your company based on principles. Can you imagine someone in a boardroom saying, "Everyone, we can't do this. Sure it will make us a ton of money, but it's wrong!" Don't forget, OpenAI had a lot of public goodwill in the beginning as well. Whatever principles Dario Amodei has as an individual, I'm sure he can show us with his personal fortune.

          Parsing it is all about intention. If someone drops coffee on your computer, should you be angry? It depends on whether they did it on purpose or it was an accident. When a company posts a statement that ads are incongruous with their mission, what is their intention behind the message?
        • advisedwang 1 hour ago
          Companies, not being sentient, don't have values; only their leaders/employees do. The question then becomes "when are the humans free to implement their values in their work, and when aren't they?" You need to inspect ownership structure, size, corporate charter, and so on, and realize that it varies with time and situation.

          Anthropic being a PBC probably helps.
          • hungryhobbit 29 minutes ago
            > Companies, not being sentient, don't have values, only their leaders/employees do

            Isn't that a distinction without a difference? Every real-world company has employees, and those people *do* have values (well, except the psychopaths).
        • haritha-j 1 hour ago
          I believe in "too big to have values". No company that has grown beyond a certain size has ever had true values, only shareholder wealth maximisation goals.
        • Computer0 51 minutes ago
          People have values; corporations do not.
        • bigyabai 47 minutes ago
          No company has values. Anthropic's resistance to the administration is only as strong as their incentive to resist, and that incentive is money. Their execs love the "Twitter vs Facebook" comparison that makes Sam Altman look so evil and gives them a relative halo effect. To an extent, Sam Altman revels in the evil persona that makes him appear like the *Darth Vader* of some amorphous emergent technology. Both are very profitable optics to their respective audiences.

          If you lend any amount of real-world credence to the value of marketing, you're already giving the ad what it wants. This is (partially) why so many businesses pivoted to viral marketing and Twitter/X outreach that *feels* genuine but requires only basic rhetorical comprehension to appease your audience. "Here at WhatsApp, we care deeply about human rights!" *audience loudly cheers*
      • libraryofbabel 2 hours ago
        I mean, yes *and*. Companies may do things for broadly marketing reasons, but that can have positive consequences for users, and companies *can* make committed decisions that don't just optimize for short-term benefits like revenue or share price. For example, Apple's commitment to user privacy is "just marketing" in a sense, but it does benefit users, and they do sacrifice sources of revenue for it and even get into conflicts with governments over the issue.

        And company execs *can* hold strong principles and act to push companies in a certain direction because of them, although they are always acting within a set of constraints and conflicting incentives in the corporate environment and may not be able to impose their direction as far as they would like. Anthropic's CEO in particular seems unusually thoughtful and principled by the standards of tech companies, although of course, as you say, even he may be pushed to take money from unsavory sources.

        Basically, it's complicated. 'Good guys' and 'bad guys' are for Marvel movies. We live in a messy world, and nobody is pure and independent once they are enmeshed within a corporate structure (or really, *any* strong social structure). I think we all know this; I'm not saying you don't! But it's useful to spell it out.

        And I agree with you that we shouldn't really *trust* any corporations. Incentives shift. Leadership changes. Companies get acquired. Look out for yourself and try not to tie yourself too closely to anyone's product or ecosystem if it's not open source.
        • bigyabai 47 minutes ago
          > and even get into conflicts with governments over the issue.

          To be fair, they also cooperate with the US government on immoral dragnet surveillance [0], and regularly assent to censorship (VPN bans, removed emojis, etc.) abroad. It's in both Apple's and most governments' best interests to *appear* like mortal enemies but cooperate for financial and domestic-security purposes. Which, for all intents and purposes, it seems they do. Two weeks after the *San Bernardino* kerfuffle, the iPhone in question was cracked, and both parties got to walk away conveniently vindicated of suspicion. I don't think this is a moral failing of anyone; it's just the obvious incentives of Apple's relationship with their domestic fed. Nobody holds Apple's morality accountable, and I bet they're quite grateful for that.

          [0] https://arstechnica.com/tech-policy/2023/12/apple-admits-to-secretly-giving-governments-push-notification-data/
      • yoyohello13 2 hours ago
        At the end of the day, the choice of companies we interact with is pretty limited. I much prefer to interact with a company that at least pays lip service to being 'good', as opposed to a company that is actively, plainly evil and OK with it.

        That's the main reason I stick with iOS. At least Apple talks about caring about privacy. Google/Android doesn't even bother to talk about it.
    • Jayakumark 3 hours ago
      They are the most anti-open-weights AI company on the planet: they don't want to do it, and they don't want anyone else to do it either. They just hide behind the safety-and-alignment blanket, saying no models are safe outside of theirs; they won't even release their decommissioned models. It's just a money play. Companies don't have ethics; the policies change based on money and who runs the company. Look at Google: their mantra once was "Don't be evil."

      https://www.anthropic.com/news/anthropic-s-recommendations-ostp-u-s-ai-action-plan

      Also, Codex CLI and Gemini CLI are open source. Claude Code never will be; it's their moat, and even though (as the creator says) it's 100% written by AI, it never will be. Their model is: you can use ours, be it the model or Claude Code, but don't ever try to replicate it.
      • skerit 27 minutes ago
        They don't even want people using OpenCode with their Max subscriptions (which OpenAI does allow, kind of)
      • Epitaque 3 hours ago
        For the sake of me seeing whether people like you understand the other side: can you try steelmanning the argument that open-weight AI can allow bad actors to cause a lot of harm?
        • thenewnewguy 2 hours ago
          I would not consider myself an expert on LLMs, at least not compared to the people who actually create them at companies like Anthropic, but I can have a go at a steelman:

          LLMs allow hostile actors to do wide-scale damage to society by significantly decreasing the marginal cost and increasing the ease of spreading misinformation, propaganda, and other fake content. While this was already possible before, it required creating large troll farms of real people, semi-specialized skills like Photoshop, etc. I personally don't believe that AGI/ASI is possible through LLMs, but if you do, that would magnify the potential damage tenfold.

          Closed-weight LLMs can be controlled to prevent, or at least reduce, the harmful actions they are used for. Even if you don't trust Anthropic to do this alone, they are a large company beholden to the law, and the government can audit their performance. A criminal or hostile nation-state downloading an open-weight LLM is not going to care about the law.

          This would not be a particularly novel idea; a similar reality is already true of other products and services that can be used to do widespread harm. Google "Invention Secrecy Act".
        • 10xDev 2 hours ago
          "Please do all the work to argue my position so I don't have to."
          • Epitaque 2 hours ago
            I wouldn't mind doing my best steelman of open-source AI if he responds (seriously, I'd try).

            Also, your comment is a bit presumptuous. I think society has been way too accepting of relying on services behind an online API, and it usually does not benefit the consumer.

            I just think it's really dumb that people argue passionately about open-weight LLMs without even mentioning the risks.
            • Jayakumark 1 hour ago
              Since you asked for it, here is my steelman argument: everything can cause harm; it depends on who is holding it, how determined they are, how easy it is, and what the consequences are. Open source will make this super easy and cheap.

              1. We are already seeing AI slop everywhere: social media content, fake impersonation. If the revenue from what's made is larger than the cost of making it, this is bound to happen. Open models can be run locally with no control and can mostly be fine-tuned to cause damage, whereas closed source is hard because vendors might block it.
              2. A less-skilled person can exploit or create harmful code who otherwise could not have.
              3. Remove guards from an open model and jailbreak it, which can't be observed anymore (like an unknown zero-day attack) since it may be running privately.
              4. Almost anything digital can be faked or manipulated from the original, or overwhelmed with false narratives so they rank better than the real thing in search.
    • Zambyte 2 hours ago
      They are the only AI company more closed than OpenAI, which is quite a feat. Any "commitment" they make should only be interpreted as marketing until they rectify this. The only "good guys" in AI are the ones developing inference engines that let you run models on your own hardware. Any individual model has some problems, but by making models fungible and fully under the user's control (access to weights), it becomes a possible positive force for the user.
    • throwaw12 2 hours ago
      I am on the opposite side of what you are thinking:

      - Blocking access to others (Cursor, OpenAI, OpenCode)
      - Asking to regulate hardware chips more, so that they don't get good competition from Chinese labs
      - Partnerships with Palantir and the DoD, as if it wasn't obvious how these organizations use technology and for what purposes

      At this scale, I don't think there are good companies. My hope is on open models, and the only labs doing good on that front are Chinese labs.
      • mym1990 2 hours ago
        The problem is that "good" companies cannot succeed in a landscape filled with morally bad ones, when you are in a time when low morality is rewarded. Competing in a rigged market by trying to be 100% morally and ethically right ends up in not competing at all. So companies have to pick and choose the hills they fight on. If you take a look at how people are voting with their dollars by paying for these tools... being a "good" company doesn't seem to factor much into it in aggregate.
        • throwaw12 2 hours ago
          Exactly. You can't compete morally when cheating, doing illegal things, and supporting bad guys are the norm. Hence, I hope open models will win in the long term.

          Similar to Oracle vs Postgres, or some closed-source obscure caching vs Redis. One day I hope we will have very good SOTA open models where closed models compete to catch up (not saying Oracle is playing catch-up with Pg).
      • esbranson 2 hours ago
        > Blocking access

        > Asking to regulate hardware chips more

        > partnerships with [the military-industrial complex]

        > only labs doing good in that front are Chinese labs

        That last one is a doozy.
      • derac 2 hours ago
        I agree; they seem to be following the Apple playbook. Make a closed-off platform and present yourself as morally superior.
    • falloutx 1 hour ago
      > I really hope Anthropic turns out to be one of the 'good guys', or at least a net positive.

      There are no good guys; Anthropic is one of the worst of the AI companies. Their CEO is continuously threatening all of the white-collar workers, and they have engineers playing the 100x-engineer game on Xitter. They work with Palantir and support ICE. If anything, Chinese companies are ethically better at this point.
    • skybrian 5 hours ago
      When powerful people, companies, and other organizations like governments do a whole lot of very good and very bad things, figuring out whether this rounds to "more good than bad" or "more bad than good" is kind of a fraught question. I think Anthropic is still in the "more good than bad" range, but it doesn't make sense to think about it along the lines of heroes versus villains. They've done things that I put in the "seems bad" column, and will likely do more. Also more good things, too.

      They're moving towards becoming load-bearing infrastructure, and then answering specific questions about what you should do about it becomes rather situational.
    • adriand 3 hours ago
      > I'm curious, how do others here think about Anthropic?

      I'm very pleased they exist and have this mindset and are also so good at what they do. I have a Max subscription - my most expensive subscription by a wide margin - and don't resent the price at all. I am earnestly and perhaps naively hoping they can avoid enshittification. A business model where I am not the product gives me hope.
    • threetonesun 3 hours ago
      Given that LLMs essentially stole their business model from public (and not-so-public!) works, the ideal state is that they all die in favor of something we can run locally.
      • mirekrusin 2 hours ago
        Anthropic settled with authors of stolen work for $1.5B; this case is closed, isn't it?
    • cedws 3 hours ago
      Their move of disallowing alternative clients from using a Claude Code subscription pissed me off immensely. I triggered a discussion about it yesterday [0]. It's the opposite of the openness that led software to where it is today. I'm usually not so bothered about such things, but this is existential for us engineers. We need to scrutinise this behaviour from AI companies extra hard or we're going to experience unprecedented enshittification. Imagine a world where you've lost your software freedoms and have no ability to fight back because Anthropic's customers are pumping out 20x as many features as you.

      [0]: https://news.ycombinator.com/item?id=46873708
      • 2001zhaozhao 40 minutes ago
        Anthropic's move of disallowing Opencode is quite off-putting to me because there really isn't a way to interpret it as anything other than a walled-garden move that abuses their market position to deliberately lock in users.

        Opencode ought to have similar usage patterns to Claude Code, being very similar software (if anything, Opencode would use fewer tokens, as it doesn't have some fancy features from Claude Code like plan files and background agents). Any subscription-usage "abuses" you can do with Opencode can also be done by running Claude Code automatically from the CLI. Therefore, restricting Opencode wouldn't really save Anthropic money, as it would just move problem users from automatically calling Opencode to automatically calling CC. The move seems to be purely one to restrict subscribers from using competing tools and enforce a vertically integrated ecosystem.

        In fact, their competitor OpenAI has already realized that Opencode is not really dissimilar from other coding agents, which is why they are comfortable officially supporting Opencode with their subscription in the first place. Since Codex is already open source and people can hack it however they want, there's no real downside for OpenAI to support other coding agents (other than lock-in). The users enter through a different platform, use the service reasonably (spending a similar amount of tokens as they would with Codex), and OpenAI makes a profit from these users as well as PR brownie points for supporting an open ecosystem.

        In my mind, being in control of the tools I use is a big feature when choosing an AI subscription and ecosystem to invest in. By restricting Opencode, Anthropic has managed to turn me off from their product offerings significantly, and they've managed to do so even though I was not even using Opencode. I don't care about losing access to a tool I'm not using, but I do care about what Anthropic signals with this move. Even if the intention isn't to lock us in and then enshittify the product later, they are certainly acting like it.

        The thing is, I am usually a vote-with-my-wallet person who would support Anthropic for its values even if they fell behind significantly compared to competitors. Now, unless they reverse course on banning open-source AI tools, I will probably revert to simply choosing whichever AI company is ahead at any given point.

        I don't know whether Anthropic knows that they are pissing off their most loyal fanbase of conscientious consumers *a lot* with these moves. Sure, we care about AI ethics and safety, but we also care about being treated well as consumers.
    • drawfloat 3 hours ago
      They work with the US military.
      • mhb 2 hours ago
        Defending the US. So?
        • drawfloat 1 hour ago
          What year do you think it is? The US is actively aggressive in multiple areas of the world. As a non-US citizen, I don't think helping that effort at the expense of the rest of the world is good.
        • spacechild1 33 minutes ago
          The US military is famous for purely acting in self-defence...
        • cess11 2 hours ago
          That's pretty bad.
          • mhb 2 hours ago
            Sweden too. So there's that.
    • insane_dreamer 3 hours ago
      I don't know about "good guys", but the fact that they seem to be highly focused on coding rather than a general-purpose chatbot (hard to overcome ChatGPT's mindshare there) means they have a customer base that is more willing to pay for usage, and therefore they are less likely to need to add an ad revenue stream. So yes, so far I would say they are on stronger ground than the others.
    • marxisttemp 4 hours ago
      I think I'm not allowed to say what I think should happen to anyone who works with Palantir.
      • fragmede 2 hours ago
        Maybe you could use an LLM to clean up what you want to say
  • politelemon 3 hours ago
    This will be an amusing post to revisit in the internet archives when, or if, they introduce ads in the future, dressed up in a different presentation and naming. Ultimately the investors will come calling.
    • strange_quark 3 hours ago
      History is littered with challenger companies chest-thumping that they're never going to do the bad thing, then doing the bad thing like a year later.
      • FeteCommuniste 3 hours ago
        "Don't be evil."
        • schmidtleonard 2 hours ago
          > The goals of the advertising business model do not always correspond to providing quality search to users.

          - Sergey Brin and Lawrence Page, "The Anatomy of a Large-Scale Hypertextual Web Search Engine", 1998
        • mirekrusin 2 hours ago
          "OpenAI"
    • rafaelmn 1 hour ago
      They are using this to virtue-signal, but in reality ads are just not compatible with their business model.

      Anthropic is mainly focusing on B2B/enterprise and tool use cases. In terms of active users, I'd guess Claude is a distant last, but in terms of enterprise/paying customers, I wouldn't be surprised if they were ahead of the others.
      • madeofpalk 1 hour ago
        See GitHub, which doesn't have display advertising.
    • yolostar1 3 hours ago
      History shows that software companies with a large chunk of their platform being free to use mainly survive thanks to ads.
      • Forgeties79 3 hours ago
        It goes well beyond free-to-use models, unfortunately.
    • giancarlostoro 2 hours ago
      I believe Perplexity is doing this already, but specifically for looking up products, which is how I use AI sometimes. I am wondering how long before eBay, Amazon, etc. partner with AI companies to give them more direct API access so they can show suggested products and whatnot. I like how AI can summarize things for me when I'm looking up products; then I open up the page and confirm for myself.
    • tiffanyh 3 hours ago
      Won't all the ad revenue come from commerce use cases... and they seem to be excluding that from this announcement:

      > *AI will increasingly interact with commerce, and we look forward to supporting this in ways that help our users. We're particularly interested in the potential of agentic commerce*
      • observationist3 hours ago
        Why bother with ads when you can just pay an AI platform to prefer products directly? Then every time an agentic decision occurs, the product preference is baked in, no human in the loop. AdTech will be supplanted by BriberyTech.
        • keeganpoppen2 hours ago
          if llm ads become a real thing, let’s acknowledge that this is exactly what will happen in no uncertain terms.
          • observationist2 hours ago
            The only chance of that happening is if Altman somehow feels sufficiently shamed into abandoning the lazy enshittification track to monetization.<p>I don&#x27;t think they have an accurate model for what they&#x27;re doing - they&#x27;re treating it like just another app or platform, using tools and methods designed around social media and app store analytics. They&#x27;re not treating it like what it is, which is a completely novel technology with more potential than the industrial revolution for completely reshaping how humans interact with each other and the universe, fundamentally disrupting cognitive labor and access to information.<p>The total mismatch between what they&#x27;re doing with it to monetize and what the thing actually means to civilization is the biggest signal yet that Altman might not be the right guy to run things. He&#x27;s savvy and crafty and extraordinarily good at the palace intrigue and corporate maneuvering, but if AdTech is where they landed, it doesn&#x27;t seem like he&#x27;s got the right mental map for AI, for all he talks a good game.
            • bluGill1 hour ago
      There are a number of different LLMs - no reason they all need to do things the same. If you are replacing web search then ads are probably how you earn money. However, if you are replacing the work people do for a company, it makes more sense to charge for the work. I&#x27;m not sure if their current token charges are the right ones, but it seems like a better track.
    • keeganpoppen2 hours ago
      yeah it’s either that or OpenAI has effected a massive own-goal… I’m leaning toward your view, but hoping that prediction does not manifest. I would be fine with all sorts of shit in life being more expensive but ad-free… but this is certainly a privileged take and I recognize that.
    • disease3 hours ago
      My thoughts exactly. They are using the Google playbook of &quot;don&#x27;t be evil&quot; until it becomes extremely profitable to be evil.
    • suprstarrd1 hour ago
      [dead]
    • water-data-dude2 hours ago
      You really think the giant ad company would put ads into their product after saying they won&#x27;t? You should strive to be less cynical.
  • jonathaneunice8 minutes ago
    Sometimes posts like this are just value-signaling. I hear a lot of cynicism and &quot;just you wait, the other shoe will drop&quot; comments along those lines.<p>But combined with the other projects Anthropic has pursued (e.g. around understanding bias and explaining &quot;how the model is thinking as it is&quot;) and decisions it has made, I&#x27;m happy with the course they&#x27;re plotting. They seem consistently upstanding, thoughtful, and respectful. I want to commend them and earnestly say: Keep up the good work!
  • sdellis55 minutes ago
    The key hurdle for AI to leap is establishing trust with users. No one trusts the big players (for good reason) and it is causing serious anxiety among the investors. It seems Claude acknowledges this and is looking to make trust a critical part of their marketing messaging by saying no ads or product placement. The problem is that serving ads is only one facet of trust. There are trust issues around privacy, intellectual property, transparency, training data, security, accuracy, and simply &quot;being evil&quot; that Claude&#x27;s marketing doesn&#x27;t acknowledge or address. Trust, on the scale they need, is going to be very hard for any of them to establish, if not impossible.
    • jstummbillig33 minutes ago
      What do you mean? Google is roughly the most trusted organization in the world by revealed preference. The 800(?) million ChatGPT users – I have a hard time reading that as a trust problem.
    • popalchemist44 minutes ago
      Impossible. The only way to know what is happening is to have the code run on your own infra.
  • sdrinf3 hours ago
      Besides editorial control (which OpenAI has openly said it wants to keep unbiased), there is a deeper issue with ad-based revenue models in AI: that of margins. If you want ads to cover compute and make margins (roughly $50 ARPU at mature FB&#x2F;GOOG levels), you have two levers: sell more advertising, or offer dumber models.<p>This is exactly what ChatGPT 5 was about. By tweaking both the model selector (thinking&#x2F;non-thinking) and using a significantly sparser thinking model (capping max spend per conversation turn), they massively controlled costs, but did so at the expense of intelligence, responsiveness, curiosity, skills, and all the things I&#x27;ve valued in o3. This was the point I dumped OpenAI and went with Claude.<p>This business model issue is a subtle one, but it&#x27;s a key reason why an advertising revenue model is not compatible (or competitive!) with &quot;getting the best mental tools&quot;: margin maximization selects against businesses optimizing for intelligence.
    • serjester2 hours ago
      The vast majority of people don&#x27;t need smarter models and aren&#x27;t willing to pay for a subscription. There&#x27;s an argument to be made that ads on free users will subsidize the power users that demand frontier intelligence - done well this could increase OpenAI&#x27;s revenue by an order of magnitude.<p>This is going to be tough to compete against - Anthropic would need to go stratospheric with their (low margin) enterprise revenue.
  • javier_e062 hours ago
      They are not trying to sell ads. They are trying to sell themselves as a monthly service. That is what I think of when they try to convince me to go there to think. I&#x27;d rather go think at Wikipedia.
    • conductr2 hours ago
      Idk, brainstorming and ideating is my main use case for AI<p>I use it as codegen too but I easily have 20x more brainstorming conversations than code projects<p>Most non-tech people I talk to are finding value with it with traditional things. The main one I&#x27;ve seen flourish is travel planning. Like, booking became super easy but full itinerary planning for a trip (hotels, restaurants, day trips&#x2F;activities, etc) has been largely a manual thing that I see a lot of non-tech people using llms for. It&#x27;s very good for open ended plans too, which the travel sites have been horrible at. For instance, &quot;I want to plan a trip to somewhere warm and beachy I don&#x27;t care about the dates or exactly where&quot; maybe I care about the budget up front but most things I&#x27;m flexible on - those kinds of things work well as a conversation.
    • derektank1 hour ago
      Wikipedia is, of course very useful, but what it’s not good at is surfacing information I am unfamiliar with. Part of this problem is that Wikipedia editors are more similar to me, and more interested in similar things to me, than the average person writing text that appears online. Part of the problem is that the design of Wikipedia does not make it easy to stumble upon unexpected information; most links are to adjacent topics given they have to be relevant to the current article. But regardless, I’m much more likely to come across a novel concept when chatting with Claude, compared to browsing Wikipedia.
    • nerdsniper2 hours ago
      It’s so hard to succeed without selling ads. There’s an exponential growth aspect to these endeavors and ads add a lot of revenue, which investors like, so those who don’t can find that the lost revenue “multiplies” due to lower outside investment, lower stock price growth, etc.<p>I wish the financial aspects were different, because Anthropic is absolutely correct about ads being antithetical to a good user experience.
      • crthpl1 hour ago
        Anthropic is very big (the biggest AI co?) in B2B, where you don&#x27;t have ads. Also, if they end up creating a datacenter full of geniuses, ads won&#x27;t make sense either.
  • seydor2 hours ago
      They made an ad to say that they won&#x27;t have ads; I don&#x27;t know if they&#x27;re aware of the irony.<p><a href="https:&#x2F;&#x2F;x.com&#x2F;ns123abc&#x2F;status&#x2F;2019074628191142065" rel="nofollow">https:&#x2F;&#x2F;x.com&#x2F;ns123abc&#x2F;status&#x2F;2019074628191142065</a><p>In any case, they draw undue attention to OpenAI rather than themselves. Not good advertising.<p>Both OpenAI and Anthropic should start selling compute devices instead. There is nothing stopping open-source LLMs from eating their lunch mid-term.
    • bananaflag1 hour ago
      Ads as a concept are not evil. There have been ads since prehistory.<p>Littering a potentially quality product with ads which one cannot easily separate is what the evil is.
  • simianwords5 hours ago
      I always found Anthropic to be trying hard to signal as one of the &quot;good guys&quot;.<p>I wonder how they can get away without showing ads when ChatGPT apparently has to. Will the enterprise business be profitable enough that ads are not required?<p>Maybe OpenAI is going for something different: democratizing access for the vast majority of people. Remember that ChatGPT is what people know about and use the free version of. Who&#x27;s to say that showing ads while also providing more access is the wrong choice?<p>Also, Claude is no match for ChatGPT in search. In my experience, ChatGPT is just way better at deep searches through the internet than Claude.
    • Etheryte2 hours ago
      ChatGPT is providing a ridiculous amount of free service to gain&#x2F;keep traction. Others also have free tiers, but to a much lesser extent. It&#x27;s similar to Uber selling rides at a loss to win markets. It will get you traction, yes, but the bill has to be paid one day.
      • timpera20 minutes ago
        Even when you&#x27;re subscribed, they&#x27;re providing unreasonable amounts of compute for the price. I am subscribed to both Claude and ChatGPT, and Claude&#x27;s limits are so tiny compared to ChatGPT&#x27;s that it often feels like a rip-off.
    • insane_dreamer3 hours ago
      Claude isn’t trying to compete with OpenAI in the general consumer chatbot space.
      • alt2273 hours ago
        None of the ai companies are, they are all looking for those multi billion deals to provide the backplane for services like Copilot and Siri. Consumer chatbots are pure marketing, no company is going to make anything off those $20 per month subs to ai chatbots.
  • dbgrman2 hours ago
      100%. Love this approach by Anthropic. The Meta &quot;monetization league&quot; is assembling at OpenAI and doing what they&#x27;ve done best at Meta.<p>However, I do think we need to take Anthropic&#x27;s word with a grain of salt, too. That they&#x27;re fully working in the user&#x27;s interest has yet to be proven, and this trust would take a lot of effort to earn. Once the company goes public (or intends to), incentives change: investors expect money, and throwing your users under the bus is a tried and tested way of increasing shareholder value.
  • big_toast2 hours ago
      I asked for this last week in an hn comment and people were pretty negative about it in the replies.<p>But I’m happy with this position and will cancel my ChatGPT and push my family towards Claude for most things. This taste effect is what I think pushes Apple devices into households. Power users making endorsements.<p>And I think that excess margin is enough to get past the lowered ad revenue opportunity.
  • mynti9 hours ago
      I think this says a lot about the business approach of Anthropic compared to OpenAI. The vast number of free messages you get from OpenAI is so large that turning a profit on them seems impossible. Anthropic is growing more slowly, but it seems like they are not running a crazy deficit. They do not need to put ads or porn in their chatbot.
  • raahelb7 hours ago
    &gt; Anthropic is focused on businesses, developers, and helping our users flourish. Our business model is straightforward: we generate revenue through enterprise contracts and paid subscriptions, and we reinvest that revenue into improving Claude for our users. This is a choice with tradeoffs, and we respect that other AI companies might reasonably reach different conclusions.<p>Very diplomatic of them to say &quot;we respect that other AI companies might reasonably reach different conclusions&quot; while also taking a dig at OpenAI on their youtube channel<p><a href="https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=kQRu7DdTTVA" rel="nofollow">https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=kQRu7DdTTVA</a>
  • jstummbillig3 hours ago
      I appreciate them taking a stance, even if nobody asked. It would be great if it were less of a bad-faith effort.<p>It&#x27;s great that Anthropic is targeting the businesses of the world. It&#x27;s a little insincere to then declare &quot;no ads&quot;, as if that decision would obviously be the same if the bulk of their users were not paying.<p>There are, as far as ads go, perfectly fine opportunities to do them in a limited way for limited things within chatbots. I don&#x27;t know who they think they are helping by highlighting how to do it poorly.
  • czk35 minutes ago
    i spend most of my time with claude thinking about when my daily usage limit is going to reset
  • s3p1 hour ago
    Good on Anthropic! I appreciate how deliberate they are on maintaining user trust. Have preferred Claude&#x27;s responses more through the API, so I don&#x27;t imagine this would have affected me as much but it is still nice to see.
  • titzer2 hours ago
    &gt; There are many good places for advertising. A conversation with Claude is not one of them.<p>&gt; ...but including ads in conversations with Claude would be incompatible with what we want Claude to be: a genuinely helpful assistant for work and for deep thinking.<p>Sadly, with my disillusionment with the tech industry, plus the trend of the past 20 years, this smacks of Larry Page&#x27;s early statements about how bad advertising could distort search results and Google would never do that. Unsurprisingly, I am not able to find the exact quote with Google.
    • Trufa2 hours ago
      Yeah, it’s a shame we’ve all grown so jaded, but I do see this as better than nothing.<p>In this Animal Farm Orwellian cycle we’ve been going through, at least they start here, unlike others.<p>I for one commend this, but stay vigilant.
  • Imnimo1 hour ago
    &gt;An advertising-based business model would introduce incentives that could work against this principle.<p>I agree with this - I&#x27;m not so much worried that ChatGPT is going to silently insert advertising copy into model answers. I&#x27;m worried that advertising alongside answers creates bad incentives that then drive future model development. We saw Google Search go down this path.
  • tolerance1 hour ago
      Anthropic probably saw how much money they made off of the Moltbot hype and figured that they don’t need ad revenue. They can go a step further and build a marketplace for similar setups, paying the developers who make them in microtransactions per token.
  • ptx1 hour ago
    So they have &quot;made a choice&quot; to keep Claude ad-free, they say. &quot;Today [...] Claude’s only incentive is to give a helpful answer&quot;, they say. But there&#x27;s nothing that suggests that they can&#x27;t make a different choice tomorrow, or whenever it suits them. It&#x27;s not profitable to betray your trust too early.
    • jhickok1 hour ago
      I can&#x27;t really imagine any statement they could give that would ease concerns that at some point in time they change their mind. But for now, it is a relief to read, even if this is a bit of marketing. The longer it goes without being enshittified the better.
  • smusamashah2 hours ago
      Claude has posted a number of very sarcastic videos on Twitter that take a jab at ads <a href="https:&#x2F;&#x2F;x.com&#x2F;claudeai&#x2F;status&#x2F;2019071118036942999" rel="nofollow">https:&#x2F;&#x2F;x.com&#x2F;claudeai&#x2F;status&#x2F;2019071118036942999</a> with the ending line &quot;Ads are coming to AI. But not to Claude.&quot;
  • nasorenga1 hour ago
    It&#x27;s nice that they don&#x27;t show ads in conversations with Claude - but I wonder if they collect profiling information from my prompts and activities to sell to advertising firms.
  • kaffekaka2 hours ago
    Sure, ad free forever, until it is not.<p>Great by Anthropic, but I put basically no long term trust in statements like this.
  • rishabhaiover3 hours ago
    What makes Anthropic seem like early Apple is not just the unique taste, but the courage to stand firm with their vision of what the product should be.
    • nickthegreek2 hours ago
      That courage was nowhere to be found when Palantir rolled up with a truckload of cash.
      • rendang50 minutes ago
        What&#x27;s the problem with Palantir?
    • mizuki_akiyama2 hours ago
      It’s better to not fall for serif fonts and warm colors.
    • CamperBob21 hour ago
      Apple had a vision, all right. It was our fault that we thought they would become the rebel with the hammer, and not the guy on the screen.
    • gowld3 hours ago
      Only 4 years old, they haven&#x27;t existed long enough to be &quot;firm&quot;.
      • mcherm2 hours ago
        Making formal, public statements like this is a good start. It is certainly better than NOT making these sorts of statements.
      • baal80spam2 hours ago
        Yeah. Does anyone remember how long it took GOOG to remove &quot;Don&#x27;t be evil&quot; from their motto?
    • kingkongjaffa2 hours ago
      &gt; Anthropic seem like early Apple<p>sorry but this is silly, nothing suggests this at all.
  • erelong3 hours ago
    Don&#x27;t understand why more companies don&#x27;t just make ads opt-in as a trade for more features<p>A lot of people are ok with ad supported free tiers<p>(Also is it possible to do ads in a privacy respecting way or do people just object to ads across the board?)
    • derektank2 hours ago
      I would object to ads across the board in this case (though I’m generally fine with even targeted ads). It would create a customer-client relationship between companies paying to advertise and the AI company, creating an incentive for Anthropic to manipulate the Claude service on their behalf. As an end user that seeks input from Claude on purchasing decisions, I do not want there to be any question as to whether or not it was subtly manipulated.
  • tiffanyh4 hours ago
    What other interaction models exist for Claude given that Anthropic seems to be stressing so much that this is for &quot;conversations&quot;?<p>(Props for them for doing this, don&#x27;t know how this is long-term sustainable for them though ... especially given they want to IPO and there will be huge revenue&#x2F;margin pressures)
  • hansmayer26 minutes ago
    Since when does HN welcome blatant self-advertising posts like this one?
  • MagicMoonlight1 hour ago
    That’s positive. How is Claude? Is it censorship heavy?
    • derektank1 hour ago
      If you broach subjects Anthropic considers sensitive (cyber security, dangerous biotech, etc) Claude is very likely to shut you down completely and refuse to answer. As someone that works in cybersecurity and uses Claude daily, it is annoying to ask a question regarding some feature of Cobalt Strike and have it refuse to answer, even though the tool’s documentation is public. I would have cancelled my ChatGPT subscription at this point if once or twice a month I didn’t need to ask it to look up something when Claude refuses.
      • golem1431 minutes ago
        How are the Chinese models in this regard? Qwen3 for instance?
  • cm20122 hours ago
    Claude focuses on enterprise and B2B rather than mass consumer, so it makes sense for them.
  • falloutx1 hour ago
    Claude is the last place where thinking happens.
  • deafpolygon40 minutes ago
    I really want to applaud Anthropic; I remain cautiously optimistic, but I’m not certain how long they will maintain this posture. I will say that the recent announcement from OpenAI has put me off from ChatGPT — I use Gemini occasionally, because it’s the devil I know. OpenAI has gone back and forth on their positions so many times in a way that feels truly hostile to their users.<p>Plus, I’m not a huge fan of Sam Altman.
  • tizzzzz5 hours ago
    That&#x27;s true. In all my conversations with AI, I think Claude&#x27;s thinking is the richest.
  • yakkomajuri1 hour ago
    RemindMe! 2 years
  • JoshPurtell3 hours ago
    Important to note Anthropic has next to no consumer usage
    • Der_Einzige3 hours ago
      Wrong (in trumps voice)
      • JoshPurtell1 hour ago
        From Sama &quot;More Texans use ChatGPT for free than total people use Claude in the US, so we have a differently-shaped problem than they do&quot;<p>Facts don&#x27;t care about your feelings
  • catigula2 hours ago
    Does the veneer of goodness despite (alleged) cutthroat business practices from Anthropic bother anyone else?
  • ChrisArchitect5 hours ago
    So apparently they&#x27;re going to run a Super Bowl ad about ChatGPT having ads (without saying ChatGPT, of course)... Has doing an ad that focuses only on something about your competitor ever been the best play? Talk about yourself.<p>Obviously it&#x27;s a play, homing in on privacy&#x2F;anti-ad concerns, like a Mozilla-type angle, but really it&#x27;s a huge ad buy just to slag off the competitors. Worth the expense just to drive that narrative?<p>Ads playlist <a href="https:&#x2F;&#x2F;www.youtube.com&#x2F;playlist?list=PLf2m23nhTg1OW258b3XBiJME7tgrRk-KI" rel="nofollow">https:&#x2F;&#x2F;www.youtube.com&#x2F;playlist?list=PLf2m23nhTg1OW258b3XBi...</a>
    • badsectoracula4 hours ago
      Wasn&#x27;t Apple&#x27;s iconic 1984 ad basically that?
      • gowld3 hours ago
        Apple&#x27;s ad had a woman dressed like a Hooter&#x27;s waitress to represent themselves. That makes themselves the focus of attention.<p><a href="https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=ErwS24cBZPc" rel="nofollow">https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=ErwS24cBZPc</a>
      • ChrisArchitect4 hours ago
        ah, good one. Was it Big Blue or Big Brother in general being referenced in that one? Either way I suppose Apple didn&#x27;t even say much of anything about their product in that one where Anthropic is at least highlighting a feature.
  • electsaudit0q8 hours ago
    [dead]
    • satvikpendem4 hours ago
      &gt; <i>its not about getting answers, its about having a patient collaborator</i><p>Looks like you&#x27;re picking up LLM speak too!<p><a href="https:&#x2F;&#x2F;www.theverge.com&#x2F;openai&#x2F;686748&#x2F;chatgpt-linguistic-impact-common-word-usage" rel="nofollow">https:&#x2F;&#x2F;www.theverge.com&#x2F;openai&#x2F;686748&#x2F;chatgpt-linguistic-im...</a>
      • cwnyth1 hour ago
        You mean human speech that LLMs were modeled after? Or was JFK influenced by LLMs, too?
        • satvikpendem1 hour ago
          There are certainly more tells and more people are speaking like LLMs now where they hadn&#x27;t before.