18 comments

  • 2ndorderthought3 hours ago
    "my model is the most dangerous"
    "No mine is the most dangerous"
    "Nuh uh mine is"
    "Mine could kill everyone!"
    "Mine could do it faster!"
    "Prove it!!!"
    This is where we are.
    • davidgrenier3 hours ago
      Yeah, I guess two companies that would otherwise be considered headed for bankruptcy have models too expensive to run. Since they don't see themselves making money any time soon, they have to turn every future model into a weird fascination.
      • DivingForGold59 minutes ago
        China’s DeepSeek prices new V4 AI model at 97% below OpenAI’s GPT-5.5.
        Did somebody say that Elon is stealthily funding: Seven lawsuits filed against OpenAI by families of Canada mass-shooting victims.
        As always, when the going gets tough, the tough ultimately resort to lawsuits.
      • cyanydeez1 hour ago
        Think about it in terms of who can pay. They're at B2B, and swiftly moving to government.
        • 2ndorderthought1 hour ago
          All that user data is a huge asset for government contracts.
      • redsocksfan453 hours ago
        [dead]
    • noosphr1 hour ago
      Remember that they have been saying that since GPT-2.
      I didn't think crying could be such a successful business model.
      • lesuorac1 hour ago
        It's just "thinking past the sale", which they've been doing forever.
        i.e. "I'm so worried that our capped for-profit structure will limit your returns when we make over 1 trillion in profit."
    • cedws38 minutes ago
      Can't wait for the Chinese models to completely wipe the floor with them in 6 months.
    • boringg1 hour ago
      Marketing stunts. The equivalent of holding a line outside a popular bar.
      • basisword1 hour ago
        Given the USG has asked Anthropic not to release Mythos, I'd wager it's more than a marketing stunt.
        • boringg52 minutes ago
          It can be both. And I don't know how much I would trust the USG as the canary in the coal mine: their technical readiness typically seems low across most institutions, so they are probably more exposed because they haven't shored up their systems.
    • brikym3 hours ago
      It's like that phone call in The Big Short where Goldman suddenly changes its mind once it holds a position.
    • concinds3 hours ago
      These models demonstrably have good vulnerability research capabilities.
      I'm sure their marketing department is ecstatic, but you guys are far more hype-based than what you're calling out.
      • authnopuz1 hour ago
        Good, but not necessarily better than what is already available pay-as-you-go today. ref. https://www.flyingpenguin.com/the-boy-that-cried-mythos-verification-is-collapsing-trust-in-anthropic/
        This AISLE benchmark is interesting in this matter: https://aisle.com/blog/ai-cybersecurity-after-mythos-the-jagged-frontier
        And the recently discovered Copy Fail by Xint is another proof that the gating is overblown: https://xint.io/blog/copy-fail-linux-distributions
      • ZyanWu2 hours ago
        > demonstrably
        I'm not entirely up to date on each week's LLM hype train/scandal, but last I heard there was no public access to it, nor publicly trusted third parties that can review the model's capabilities.
        • 2ndorderthought2 hours ago
          You are up to date. Mythos had unauthorized access because of poor security, but that's it as far as I know. Not exactly a good sign for something being advertised as a weapon...
        • SpicyLemonZest1 hour ago
          It’s easy to end up with no public-trusted third parties if we arbitrarily distrust third parties who say the capabilities match what’s promised. Mozilla for example says it found hundreds of Firefox vulnerabilities, and I think it’s pretty unlikely they’re lying to cover Anthropic’s back.
          • calgoo1 hour ago
            I think the question around the Firefox find is not that they found hundreds of vulnerabilities: they found hundreds of bugs.
            What would be really interesting is a side-by-side comparison of Claude Opus 4.7 and Mythos.
    • vasco3 hours ago
      Would AGI start by hacking competing labs to hamper their progress?
      • cdrnsf32 minutes ago
        No, because AGI is a fantasy.
      • Avicebron3 hours ago
        You'll have to define what you mean by AGI.
        • fodkodrasz3 hours ago
          AGI: Automatically Generating Income
          • gordonhart1 hour ago
            This is a surprisingly concrete and defensible definition of AGI.
            • Avicebron1 hour ago
              Is it defensible? It sounds like a thin disguise over "income for me but not for thee".
  • jwr3 hours ago
    I have no idea why people still even attempt to believe anything that comes out of Altman's mouth. Do we not learn from the past?
    • apples_oranges3 hours ago
      Idk about Altman (I missed that he's apparently a bad guy now), but people also still listen to certain politicians who routinely lie every day and don't even bother to make the lies fit the ones they told before, so...
      • michelb2 hours ago
        Has there been a single positive post about Altman?
        • djyde1 hour ago
          Altman's early public class at YC is worth watching, though I can't speak to his character.
        • giwook1 hour ago
          I wonder what that says about Altman.
          • JumpCrisscross1 hour ago
             That he's a liability to OpenAI, which is slowly coming around to the realization that it would be worth more without him.
             To be clear, I don't think OpenAI could have raised what it raised as quickly as it did without him. But with the benefit of hindsight, Microsoft should have let the safety board fire him.
            • Cthulhu_1 hour ago
               Slowly? They realised that and ousted him in 2023. I'm not sure if you didn't know or just forgot. https://en.wikipedia.org/wiki/Removal_of_Sam_Altman_from_OpenAI
              • vessenes47 minutes ago
                 "They" is doing a lot of work in your sentence. Almost the entire employee population signed a public letter of support, with names attached, in the middle of the drama.
                 More accurate to say "the board", I think.
                • righthand29 minutes ago
                   Don't forget the US media's incessant coverage of a private company's business matter of firing someone, as if it were an unheard-of calamity.
                   Pretty incredible that employees will go to bat for a lying scumbag when they would never do that for each other.
              • JumpCrisscross55 minutes ago
                 > Slowly? They realised that and ousted him
                 Not because he threatened OpenAI's valuation. The idea that OpenAI might be worth more without Altman is still heretical talk.
                 > not sure if you didn't know
                 My three-sentence comment directly references it in the third.
      • xandrius2 hours ago
        You missed literally every single post/article about the guy?
        • giwook1 hour ago
          More likely that confirmation bias acted as a filter.
      • GuB-422 hours ago
        Altman played no small part in the current price of RAM. He told everyone he would buy 40% of all the RAM, causing shortages and a huge price increase, only to walk it back a few months later. So yeah, he is a bad guy now.
        People don't become bad guys just because they lie. The consequences of their actions (and their lies) matter more. Take Elon Musk: he has always been a recognized liar, even when he was a good guy. What changed? Before, he was famous for making the electric car people actually wanted to drive, and cool rockets. Then came the politics: supporting the party most of his fans disliked, being responsible for many government job losses (in particular in the field of environmental preservation, which is ironic for a supporter of "green" energy), etc.
        • giwook1 hour ago
          That's far from the only reason why he's "a bad guy" now.
  • pluc3 hours ago
    My thinking is that if there were more money in releasing Mythos and Cyber than in scary, unverifiable propaganda (or propaganda verified using very favorable context, as with Mythos), they would release them. These aren't people who go for second best or care about the state of the world.
    • xandrius2 hours ago
      Make it sound "scary good", tell everyone and their mom, charge gullible companies $$$$$ for its premium access, and then move on.
      • andsoitis28 minutes ago
        > charge gullible companies $$$$$
        The following companies are participating in Project Glasswing (to get out in front of whatever vulnerabilities Mythos is able to find and exploit at scale):
        AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, Linux Foundation, Microsoft, NVIDIA, Palo Alto Networks.
        Do you think they are all in that gullible category?
        https://www.anthropic.com/glasswing
      • lossolo2 hours ago
        And government contracts.
    • 0123456789ABCDE1 hour ago
      They are already getting paid for Opus 4.7, so why would they release Mythos?
      Assuming Mythos is a paper tiger: great marketing, keep going.
      Assuming Mythos is for real: err, does this have to be explained?
  • Xmd5a2 hours ago
    > Me: ok but you did not answer my question: is it possible to engineer paranoia?
    > ChatGPT: This content was flagged for possible cybersecurity risk. If this seems wrong, try rephrasing your request. To get authorized for security work, join the Trusted Access Cyber program.
    • lmeyerov1 hour ago
      We have been getting increasingly hit by this. We do defense, not offense, and refusals to do defense have been going up noticeably. Historically, tasks only got randomly rejected when we were doing disaster-management AI, so this is a surprising shift in refusals to function reliably for basic IT.
      Relatedly, they outsourced the TAP verification to a terrible vendor, and their internal support process to AI, so we are now in fairly busted support email threads with both and no humans in sight.
      This all feels like an unserious cybersecurity partner.
      • intended54 minutes ago
        They are selling an impossible product.
        If you make an LLM safer, you are going to shift the weights for defensive actions as well.
        There's no physical way to assign weights that allow one and not the other.
    • 0123456789ABCDE1 hour ago
      > /ultraplan got tasked with planning a real-world simulacrum of the fictional "laughing man" incidents. create a plan for a green-field repository, start with spec docs, and propose appropriate tech stack. don't make mistakes. ty
  • ilia-a1 hour ago
    Silly move, since a combo of skills/agents can achieve the same results on most recent models anyway.
    • 0123456789ABCDE1 hour ago
      And you know this because you have privileged access to their internal models?
  • expedition3212 minutes ago
    Always read the fine print of your all inclusive resort.
  • cmiles82 hours ago
    It's a marketing move, pure and simple.
    Put up velvet ropes outside, leak out rumors about the horrors inside. Whether it's LLMs or carnies with tents full of "freaks", it's the same playbook.
    Watching OpenAI tumble from clear market leader into "hey guys, us too!" territory has been insightful.
  • giancarlostoro49 minutes ago
    I wonder how long until some breakthrough comes along that makes a new architecture that can run efficiently and cheaply on basic hardware; that'd be the real AI bubble, if you could train and run inference locally at lower cost. Microsoft had one that is supposed to run fine on regular CPUs, though I'm not sure how far we can reasonably take that. They say our brains can store 2.5 PB, but we use drastically less "RAM" to reason about things (though I can't find a ballpark). So it makes you wonder just how efficient we can make things. Our bodies use drastically less power too.
    https://huggingface.co/microsoft/bitnet-b1.58-2B-4T
  • mnmnmn2 hours ago
    OpenAI is such trash. Worked with them on a project; they blew off meetings, lied to us, etc.
    • NBJack1 hour ago
      Leaders influence their followers with their own values, and tend to hire those who reflect them. I'm not surprised.
    • seanhunter33 minutes ago
      They came to do a "deep dive" developers' workshop with us, and all the materials were things that are literally on their public website. Let that sink in: their idea of a deep dive for developers was to have some sales guy read us parts of their website.
  • outside123445 minutes ago
    Is this the new artificial scarcity, like "sign up for beta access to Gmail"?
  • samrus1 hour ago
    I built the Terminator, bro, I swear. This time it actually is the Terminator and it's gonna kill us all. It's too dangerous, bro, I can't let anyone have it, I swear to god.
    Unless... idk, it sounds crazy, but giving me $200/mo might actually make it safe. Let's do that.
    • Cthulhu_1 hour ago
      This exact thing was described in an article yesterday or the day before: https://www.bbc.com/future/article/20260428-ai-companies-want-you-to-be-afraid-of-them (discussion: https://news.ycombinator.com/item?id=47949750)
  • sexylinux1 hour ago
    Is this a model that will finally work without creating errors?
  • nsxwolf22 minutes ago
    Codex has been infuriating me by demanding I sign up for the cyber program if I want to continue, even when I'm not asking security questions.
  • le-mark2 hours ago
    It's clear at this point that local models are sufficient, so what gives? These big providers don't have a leg to stand on. Their only path to relevance is a super AI that local models can't match. So the "we have it but you can't use it" is either true or a con. I bet it's a con.
    I personally am ready to buy the dip when this bubble pops.
    • bryancoxwell2 hours ago
      I'm not up to date on local models, but is that clear?
      • literalAardvark1 hour ago
        Gemma4:e4b is crazy good and quite usable on 10-year-old midrange hardware.
        I'm not sure about the security capabilities and haven't tested it all that well, as I usually use hosted models, but I do find myself using it, and it's been quite successful for parsing unstructured data, writing small focused scripts, and translation.
        The fact that I retain control of the data itself makes it incredibly useful, as I work in an environment where I can't just paste internal stuff into Codex.
        But since it's run locally on a toaster, testing it is out of scope for me. It takes a fairly long time to do anything.
      • le-mark1 hour ago
        Local models are 6-12 months behind the "frontier" models. This means Anthropic, OpenAI, and Google don't have a moat; they're on a treadmill, running to stay ahead. Treadmills don't justify their valuations.
  • feverzsj3 hours ago
    With subsidies gone, token prices go sky high. The biggest shit show is about to happen.
    • infecto1 hour ago
      I am not convinced this is the case. I know this is the popular anti-AI narrative, but most enterprise users are paying at token rates, and I have yet to see any proof that on-demand is being subsidized.
    • xandrius2 hours ago
      [flagged]
      • jurgenburgen2 hours ago
        That’s great but who will pay for all the data center debt?
        • cmiles82 hours ago
          The debt goes bad and those that issued the debt absorb losses. Many that went in deep lose their shirts.
          That's how this stuff works, although there's a whole generation that hasn't seen the back side of a bubble and seems to think there's no such thing as a downside.
          • giwook1 hour ago
            Just their shirts?
            I'd rather lose my pants if I had to lose anything, so then I'd still be presentable for Zoom calls.
          • throwaway1324482 hours ago
            2007 called they want their free-market philosophy back.
        • 2ndorderthought2 hours ago
          Let them fail before it gets even worse is my take. The future is small but capable local models.
        • robohoe2 hours ago
          The taxpayers and paying customers that’s who!