10 comments

  • neogodless, 2 hours ago
    Related: https://news.ycombinator.com/item?id=47154983 "The Pentagon threatens Anthropic" (astralcodexten.com), ~1 day ago, 115+ comments
  • Jimmc414, 3 hours ago
    https://archive.is/lvViA
  • franciscator, 2 hours ago
    Don't get distracted. This technology is going to be used to kill people.
    • gilesvangruisen, 1 hour ago
      Always has been! Lots of math/ML/regression models are already being used to make kill decisions: homing missiles, kamikaze drones, naval defense/sentry systems. All use a combination of computer vision, signal classification, and predictive tracking. LLMs are just the latest solved math problem.
      • jvanderbot, 1 hour ago
        I wrote my PhD thesis on tracking invasive fish.

        Not only was what we built essentially a scout-to-kill drone, it was also built on a large body of tracking literature that was itself developed to track things in order to kill them. No matter how far back you go, the military has always been a huge player (on the supply or demand side) in R&D.
      • josefritzishere, 25 minutes ago
        Not just people... it will be used to kill Americans.
    • IncreasePosts, 1 hour ago
      Is that a good or a bad thing?
      • ryanisnan, 1 hour ago
        Are you serious? When you lower the cost of killing, nobody wins.
        • hedora, 1 hour ago
          It usually boils down to who controls the technology, not the absolute cost.

          Centralized AI killbots with no safety controls are almost certainly bad.

          Individually owned and controlled militias of defensive (and decentralized) AI killbots? Unclear.
          • NoGravitas, 1 hour ago
            The film "Slaughterbots" presents a scenario that could be either of those, but is implied to be the latter.
          • neoromantique, 1 hour ago
            You don't lower the cost of killing through improved targeting; you lower it when thugs can shoot people in broad daylight with no consequences.

            I understand the argument that moving decision-making power into a black box would clear the operator's conscience, yadda yadda yadda, but newsflash: the price of a human life is falling so quickly that I think we're far beyond the point where it matters.
            • Insanity, 1 hour ago
              Less severe than killing, but you're essentially describing the "broken windows" theory: https://en.wikipedia.org/wiki/Broken_windows_theory
    • BigTTYGothGF, 1 hour ago
      "Going to be"?
    • mothballed, 2 hours ago
      Lol, the CEO of Palantir has already bragged that occasionally his enemies have to be killed. [1]

      It's honestly wild watching companies with common investors whose executives, when you dig into the details, are bragging about killing their enemies. And then people argue that when surveillance is used to systematically stalk all of us individually, it's magically not illegal. If you did that to a bunch of your ex-girlfriends, tracking all their movements to work and the grocery store, and argued "muh free speech to record," you'd be in jail lickety-split, because there's a big difference between recording the public and stalking people while conspiring with people who are literally bragging about killing their enemies.

      [1] https://www.youtube.com/watch?v=G5gC_fParbY
      • delfinom, 1 hour ago
        The CEO of Palantir looks like he does lines of coke in the bathroom before speaking. I don't know if anything that cokehead says can be taken seriously or as true.
        • mothballed, 1 hour ago
          The year of that earnings call, the stock went up 5x. He could have walked up naked and given a hallucinogen-induced speech and his investors would have lapped it up. This was a moment when a CEO was able to speak his mind without reproach: a rare glimpse of what these surveillance companies are actually thinking.
    • tinfoilhatter, 2 hours ago
      It probably already is. I'm confident that whatever us plebs get access to is less capable than what nation states have.
  • sedivy94, 2 hours ago
    Anthropic signed a $200 million contract with the world's largest military and hadn't considered whether it would be used for military operations? When an article reads like fiction, I can't help but assume there's an entirely different political disagreement happening behind closed doors.
    • sigmar, 1 hour ago
      > Anthropic has repeatedly asked defense officials to agree to guardrails that would restrict its AI model... also wants to ensure Claude is not used by the Pentagon for final targeting decisions in military operations without any human involvement, one source familiar with the negotiations said. Claude is not immune from hallucinations and not reliable enough to avoid potentially lethal mistakes, like unintended escalation or mission failure without human judgment, the person said.

      They explicitly allow it to be used in military operations, just not to kill people without a human in the loop.

      Source: https://www.cbsnews.com/news/pentagon-anthropic-offer-ai-unr...
    • evanb, 2 hours ago
      They did consider it, got a contract affirming that the military would be bound by the same pre-existing terms of service as every other user, and now want to resist the military's pressure to renegotiate.

      That might be naive, but the entire issue is that they want to stick to the original contract, which is of course the purpose of a contract in the first place.
    • mgraczyk, 2 hours ago
      The contract included the agreement, and the government is now trying to change the contract; hence the disagreement.
  • noonething, 2 hours ago
    It's been bad since 2015. All the A.I. companies are in on it now.
    • tinfoilhatter, 2 hours ago
      Military and intelligence agencies have been involved in Silicon Valley for its entire existence. They were instrumental in helping create the first computers. The internet was a DARPA project. Facebook sprang up from a failed DARPA project called LifeLog. Google received funding from the military-industrial complex and intelligence agencies early in its life. These companies have always had a dual purpose, and I highly doubt the major players in AI are any different.
  • danesparza, 2 hours ago
    Are they still feuding? I thought it was a moot point: https://news.ycombinator.com/item?id=47145963
    • sebzim4500, 1 hour ago
      The dispute is over the completely automated operation of lethal weapons powered by Anthropic's products; it has nothing to do with AI safety.
  • FrustratedMonky, 2 hours ago
    Just ask at a high level: the military has a problem with being limited in its ability to do mass surveillance of the US public. Why? Why would it have a problem with that limitation?

    The issues in the contract under dispute are things we shouldn't want the military doing.
  • dnautics, 54 minutes ago
    Isn't it a good sign that we are having these debates, and that they're aired in public?
  • yosito, 2 hours ago
    "Anthropic had built its brand around promoting AI safety, emphasizing red lines it said it wouldn't cross. Its usage guidelines contain strict limitations that prohibit Claude from facilitating violence, developing or designing weapons, or conducting mass surveillance."

    I can't say I fully trust this at face value, but I will say, at least at face value, that this commitment to non-violence is something I wish more tech companies in history had made. Whether it's an authentic commitment or just PR remains to be seen.
  • ChrisArchitect, 2 hours ago
    Related:

    "Hegseth gives Anthropic until Friday to back down on AI safeguards"
    https://news.ycombinator.com/item?id=47140734
    https://news.ycombinator.com/item?id=47142587

    "Tech companies shouldn't be bullied into doing surveillance"
    https://news.ycombinator.com/item?id=47160226