6 comments

  • cmiles8 16 minutes ago
    They need to be more worried about creating a viable economic model for the present AI craze. Right now there's no clear path to making any of the present insanity a profitable endeavor. Yes, NVIDIA is killing it, but with money pumped in from highly upside-down sources.

    Things will regulate themselves pretty quickly when the financial music stops.
  • nis0s 55 minutes ago
    We don’t know what kind of insecure systems we’re dealing with here, and there’s a pervasive problem of incestuous dependencies in a lot of AI tech stacks, which might lead to some instability or security risks. Adversarial attacks against LLMs are just too easy. It makes sense to let states experiment and find out what works and doesn’t, both as a social experiment and technological one.
  • TheAceOfHearts 2 hours ago
    Archive: https://archive.is/j1XTl

    I cannot help but feel that discussing this topic under the blanket term "AI Regulation" is a bit deceptive. I've noticed that whenever this topic comes up, almost every major figure remains rather vague on the details. Who are some influential figures actually advancing clearly defined regulations or key ideas for approaching how we should think about AI regulation?

    What we should be doing is surfacing well-defined points regarding AI regulation and discussing them, instead of fighting proxy wars for opaque groups with infinite money. It feels like we're at the point where nobody is even pretending that people's opinions on this topic are relevant; it's just a matter of pumping enough money and flooding the zone.

    Personally, I still remain very uncertain about the topic; I don't have well-defined or clearly actionable ideas. But I'd love to hear what regulations or mental models other HN readers are using to navigate and think about this topic. Sam Altman and Elon Musk have both mentioned vague ideas of how AI is somehow going to magically result in UBI and a magical communist utopia, but nobody has ever pressed them for details. If they really believe this, then they could make some more significant legally binding commitments, right? Notice how nobody ever asks: who is going to own the models, robots, and data centers in this UBI paradise? It feels a lot like the Underpants Gnomes: (1) Build AGI, (2) ???, (3) Communist Utopia and UBI.
    • jasonsb 2 hours ago
      > I cannot help but feel that discussing this topic under the blanket term "AI Regulation" is a bit deceptive. I've noticed that whenever this topic comes up, almost every major figure remains rather vague on the details. Who are some influential figures actually advancing clearly defined regulations or key ideas for approaching how we should think about AI regulation?

      There's a vocal minority calling for AI regulation, but what they actually want often strikes me as misguided:

      "Stop AI from taking our jobs" - This shouldn't be solved through regulation. It's on politicians to help people adapt to a new economic reality, not to artificially preserve bullshit jobs.

      "Stop the IP theft" - This feels like a cause pushed primarily by the 1%. Let's be realistic: 99% of people don't own patents and have little stake in strengthening IP protections.
      • lelanthran 28 minutes ago
        > This shouldn't be solved through regulation. It's on politicians to help people adapt to a new economic reality, not to artificially preserve bullshit jobs.

        They already do this[1]. Why should there be an exception carved out for AI-type jobs?

        ------------------------------

        [1] What do you think tariffs are? Show me a country without tariffs and I'll show you a broken economy with widespread starvation and misery.
      • phyzix5761 1 hour ago
        > "Stop AI from taking our jobs" - This shouldn't be solved through regulation. It's on politicians to help people adapt to a new economic reality, not to artificially preserve bullshit jobs.

        This is a really good point. If a country tries to "protect" jobs by blocking AI, it only puts itself at a disadvantage. Other countries that don't pass those restrictions will produce goods and services more efficiently and at lower cost, and they'll outcompete you anyway. So even with regulations, the jobs aren't actually saved.

        The real solution is for people to upskill and learn new abilities so they can thrive in the new economic reality. But it's hard to convince people that they need to change instead of expecting the world around them to stay the same.
        • smallmancontrov 43 minutes ago
          This presupposes the existence of said jobs, which is a whopper of an assumption that conveniently shifts blame onto the most vulnerable. Of course, that's probably the point.

          This will work even worse than "if everyone goes to college, good jobs will appear for everyone."
          • phyzix5761 12 minutes ago
            The good (or bad) thing about humans is that they always want more than what they have. AI seems like a nice tool that may solve some problems for people, but in the very near future customers will demand more than what AI can do, and companies will need to hire people who can deliver more, until those jobs, like all jobs eventually, are automated away. We see this happen every 50 years or so in society. Just have a conversation with people your grandparents' age and you'll see they've gone through the same thing several times.
        • plastic-enjoyer 1 hour ago
          > The real solution is for people to upskill and learn new abilities so they can thrive in the new economic reality. But it's hard to convince people that they need to change instead of expecting the world around them to stay the same.

          But why do I have to? Why should your life be dictated by the market and the corporations that are pushing these changes? Why do I have to be afraid that my livelihood is at risk because I don't want to adapt to an ever-faster-changing market? The goal of automation and AI should be to reduce or even eliminate the need for us to work, not the further reduction of people to their economic value.
          • phyzix5761 15 minutes ago
            Because the world, sadly, doesn't revolve around just one individual. We are a society where other individuals have different goals and needs, and when those are met by the development of a new product offering, it shifts how people act and where they spend their money. If enough people shift, then it affects jobs.
      • jillesvangurp 1 hour ago
        It's less about who is right and more about economic interests and lobbying power. There's a vocal minority that is just dead set against AI, using all sorts of arguments related to religion, morality, fears about mass unemployment, all sorts of doom scenarios, etc. However, this is ultimately a minority without a lot of lobbying power. And the louder they are, and the less of this stuff actually materializes, the easier it becomes to dismiss a lot of the arguments. Despite the loudness of the debate, the consensus is nowhere near as broad on this as it may seem to some.

        And the quality of the debate remains very low as well. Most people barely understand the issues. That includes many journalists, who are still mostly getting hung up on the whole "hallucinations can be funny" thing. There are a lot of confused people spouting nonsense on this topic.

        There are special interest groups with lobbying power: media companies with intellectual properties, actors worried about being impersonated, etc. Those have some ability to lobby for changes. And then you have the wider public, which isn't that well informed and has sort of caught on to the notion that ChatGPT is now definitely a thing that is sometimes mildly useful.

        And there are the AI companies, which are definitely very well funded and have an enormous amount of lobbying power. They can move whole economies with their spending, so they are getting relatively little pushback from politicians. Political Washington and California run on obscene amounts of lobbying money, and the AI companies can provide a lot of it.
      • ulfw 49 minutes ago
        > "Stop AI from taking our jobs" - This shouldn't be solved through regulation. It's on politicians to help people adapt to a new economic reality, not to artificially preserve bullshit jobs.

        So politicians are supposed to create "non-bullshit" jobs out of thin air?

        The job you've done for decades is suddenly bullshit because some shit LLM is hallucinating nice-sounding words?
        • piva00 28 minutes ago
          They do create bullshit jobs in finance, though, by propping up the system when it's about to collapse from the consequences of its own actions.

          Not that I believe they should allow the financial system to collapse without intervention, but the interventions during recent crises were done to save corporations that should have been extinguished, instead of the common people who were affected by the consequences.

          Which I believe is what's lacking in the whole discussion: politicians shouldn't be trying to maintain the labour status quo if/when AI changes the landscape, because that would be a distortion of reality, but there needs to be some off-ramp and direct help for the people who will suffer from the change, without going through the bullshit of helping companies in the hopes they eventually help people. As many on HN say, companies are not charities; if they can make an extra buck by fucking someone, they will do it. The government is supposed to be helping people as a collective.
      • piva00 2 hours ago
        > "Stop the IP theft" - This feels like a cause pushed primarily by the 1%. Let's be realistic: 99% of people don't own patents and have little stake in strengthening IP protections.

        Artists are not primarily in the 1%, though, and it's not only patents that can be subject to IP theft.
    • _spduchamp 1 hour ago
      Algorithmic Accountability. Not just for AI, but also social media, advertising, voting systems, etc. Algorithm Impact Assessments need to become mandatory.
    • hobom 40 minutes ago
      There are several concrete proposals to regulate AI, either proposed or passed. The most recent prominent example of a passed law is California SB 53, whose summary you can read here: https://carnegieendowment.org/emissary/2025/10/california-sb-53-frontier-ai-law-what-it-does?lang=en
    • dist-epoch 2 hours ago
      Elon Musk explicitly said in his latest Joe Rogan appearance that he advocates for the smallest government possible - just army, police, legal. He did NOT mention social care or health care.

      That doesn't quite align with UBI, unless he envisions the AI companies directly giving the UBI to people (when did that ever happen?).
      • titanomachy 1 hour ago
        It&#x27;s possible that the interests of the richest man in the world don&#x27;t align with the interests of the majority, or society as a whole.
      • ToucanLoucan 1 hour ago
        Like every other self-serving rich “Libertarian,” they want a small government when it stands to get in their way, and a large one when they want their lifestyle subsidized by government contracts.
      • disgruntledphd2 1 hour ago
        > Elon Musk explicitly said in his latest Joe Rogan appearance that he advocates for the smallest government possible - just army, police, legal. He did NOT mention social care, health care.

        This would be a 19th-century government, just the "regalian" functions. It's not really plausible in a world where most of the population who benefit from the health/social care/education functions can vote.
  • dist-epoch 2 hours ago
    But are they really the ones in control?

    It's not the tech titans; it's Capitalism itself building the war chest to ensure its embodiment and transfer into its next host - machines.

    We are just its temporary vehicles.

    > "This is because what appears to humanity as the history of capitalism is an invasion from the future by an artificial intelligent space that must assemble itself entirely from its enemy's resources."
    • faidit 2 hours ago
      Yes, these decisions are being made by flesh-and-blood humans at the top of a social pyramid. Nick Land's deranged (and often racist) word-salad sci-fi fantasies tend to obfuscate that. If robots turn on their creators and wipe out humanity, then whatever remains wouldn't be a class society or a market economy of humans any more, hence no longer the social system known as capitalism by any common definition.
      • dist-epoch 1 hour ago
        If there is more than one AI remaining, they will have some sort of economy between them.
        • faidit 27 minutes ago
          I mean, that could be drone swarms blasting each other to bits for control of what remains of the earth's charred surface, though. That wouldn't be capitalism any more than dinosaurs eating each other would be. I don't see post-human AI selling goods to consumers and prices being set through competition.
    • jrflowers 2 hours ago
      > We are just its temporary vehicles.

      > "This is because what appears to humanity as the history of capitalism is an invasion from the future by an artificial intelligent space that must assemble itself entirely from its enemy's resources."

      I see your "Roko's basilisk is real" and counter with "Slenderman locked it in the backrooms and it got sucked up by goatse" in this creepypasta-is-real conversation.
      • Cthulhu_ 1 hour ago
        I, for one, welcome our new AI overlords.

        (Disclaimer: I don't actually; I'm just memeing. I don't think we'll get AI overlords unless someone actively puts AI in charge and in control of people (people following directions from AI, which already happens, e.g. ChatGPT making suggestions), military hardware, and the entire chain of command in between.)
        • jrflowers 1 hour ago
          Literally no one on earth is trying to make an AI overlord that’s an AI. There’s like a handful of dudes that think that if they can shove their stupid AI into enough shit then they can call themselves AI overlords.
  • metalman 2 hours ago
    [flagged]
  • conartist6 2 hours ago
    God forbid we protect people from the theft machine
    • __MatrixMan__ 2 hours ago
      There are a lot of problems with AI that need some carefully thought-out regulation, but infringing on rights granted by IP law still isn't theft.
      • jasonsb 2 hours ago
        Agreed. Regulate AI? Sure, though I have zero faith politicians will do it competently. But more IP protection? Hard pass. I'd rather abolish patents.
        • TheAceOfHearts 2 hours ago
          I think one of the key issues is that most of these discussions are happening at too high an abstraction level. Could you give some specific examples of AI regulations that you think would be good? If we actually start elevating and refining key talking points that define the direction in which we want things to go, they will actually have a chance to spread.

          Speaking of IP, I'd like to see some major copyright reform. Maybe bring the duration back down to the original 14 years, and expand fair use. When copyright lasts so long, one of the key components of cultural evolution and iteration is severely hampered and slowed down. The rate at which culture evolves is going to continue accelerating, and we need our laws to catch up and adapt.
          • jasonsb 1 hour ago
            > Could you give some specific examples of AI regulations that you think would be good?

            Sure, I can give you some examples:

            - deceiving someone into thinking they're talking to a human should be a felony (prison time, no exceptions for corporations)
            - ban government/law-enforcement use of AI for surveillance, predictive policing, or automated sentencing
            - no closed-source AI allowed in any public institution (schools, hospitals, courts...)
            - no selling or renting paid AI products to anyone under 16 (free tools only)
            • j16sdiz 1 hour ago
              > deceiving someone into thinking they're talking to a human

              This is going to be as enforceable as the CAN-SPAM Act (i.e. you will get a few big cases, but it's nothing compared to the overall situation).

              How do you prove it in court? Do we need to record all private conversations?
        • faidit 2 hours ago
          It's theft. But not all IP theft, or theft in general, is morally equivalent. A poor person stealing a loaf of bread or pirating a movie they couldn't afford is just. A corrupt elite stealing poor farmers' food or stealing content from small struggling creators is not.
          • jasonsb 2 hours ago
            Ask yourself: who owns the IP you're defending? It's not struggling artists; it's corporations and billionaires.

            Stricter IP laws won't slow down closed-source models with armies of lawyers. They'll just kill open-source alternatives.
            • Cthulhu_ 2 hours ago
              Under copyright law, if HN's T's & C's didn't override it, anything I write and have written on HN is *my* IP. And the AI data hoarders used it to train their stuff.
              • jasonsb 1 hour ago
                Let's meet in the middle: only allow AI data hoarders to train their stuff on your content if the model is open source. I can stand behind that.
            • faidit 2 hours ago
              I never advocated "stricter IP laws". I would, however, point out the contradiction between current IP laws being enforced against kids using BitTorrent while going unenforced against billionaires and their AI ventures, despite them committing IP theft on a far grander scale.
            • fzeroracer 1 hour ago
              How do you expect open-source alternatives to exist when they cannot enforce how you use their IP? Open-source licenses exist and are enforced under IP law. This is part of the reason AI companies have been pushing so hard for IP reform: they want to decimate IP laws for thee but not for me.