> I cannot help but feel that discussing this topic under the blanket term "AI Regulation" is a bit deceptive. I've noticed that whenever this topic comes up, almost every major figure remains rather vague on the details. Who are some influential figures actually advancing clearly defined regulations or key ideas for approaching how we should think about AI regulation?<p>There's a vocal minority calling for AI regulation, but what they actually want often strikes me as misguided:<p>"Stop AI from taking our jobs" - This shouldn't be solved through regulation. It's on politicians to help people adapt to a new economic reality, not to artificially preserve bullshit jobs.<p>"Stop the IP theft" - This feels like a cause pushed primarily by the 1%. Let's be realistic: 99% of people don't own patents and have little stake in strengthening IP protections.
> This shouldn't be solved through regulation. It's on politicians to help people adapt to a new economic reality, not to artificially preserve bullshit jobs.<p>They already do this[1]. Why should there be an exception carved out for AI-type jobs?<p>------------------------------<p>[1] What do you think tariffs are? Show me a country without tariffs and I'll show you a broken economy with widespread starvation and misery.
> "Stop AI from taking our jobs" - This shouldn't be solved through regulation. It's on politicians to help people adapt to a new economic reality, not to artificially preserve bullshit jobs.<p>This is a really good point. If a country tries to "protect" jobs by blocking AI, it only puts itself at a disadvantage. Other countries that don't pass those restrictions will produce goods and services more efficiently and at lower cost, and they’ll outcompete you anyway. So even with regulations the jobs aren't actually saved.<p>The real solution is for people to upskill and learn new abilities so they can thrive in the new economic reality. But it's hard to convince people that they need to change instead of expecting the world around them to stay the same.
This presupposes the existence of said jobs, which is a whopper of an assumption that conveniently shifts blame onto the most vulnerable. Of course, that's probably the point.<p>This will work even worse than "if everyone goes to college, good jobs will appear for everyone."
The good (or bad) thing about humans is that they always want more than what they have. AI seems like a nice tool that may solve some problems for people, but in the very near future customers will demand more than what AI can do, and companies will need to hire people who can deliver more, until those jobs, eventually like all jobs, are automated away. We see this happen every 50 years or so in society. Just have a conversation with people your grandparents' age and you'll see they've gone through the same thing several times.
> The real solution is for people to upskill and learn new abilities so they can thrive in the new economic reality. But it's hard to convince people that they need to change instead of expecting the world around them to stay the same.<p>But why do I have to? Why should your life be dictated by the market and corporations that are pushing these changes? Why do I have to be afraid that my livelihood is at risk because I don't want to adapt to the ever faster changing market? The goal of automation and AI should be to reduce or even eliminate the need for us to work, and not the further reduction of people to their economic value.
Because the world, sadly, doesn't revolve around just 1 individual. We are a society where other individuals have different goals and needs and when those are met by the development of a new product offering it shifts how people act and where they spend their money. If enough people shift then it affects jobs.
It's less about who is right and more about economic interests and lobbying power. There's a vocal minority that is just dead set against AI, using all sorts of arguments related to religion, morality, fears about mass unemployment, various doom scenarios, etc. But it's a minority with little lobbying power, ultimately. And the louder they are while little of this stuff actually materializes, the easier it becomes to dismiss their arguments. Despite the loudness of the debate, the consensus is nowhere near as broad as it may seem to some.<p>And the quality of the debate remains very low as well. Most people barely understand the issues. That includes many journalists who are still mostly hung up on the whole "hallucinations can be funny" thing. There are a lot of confused people spouting nonsense on this topic.<p>There are special interest groups with lobbying power: media companies with intellectual property, actors worried about being impersonated, etc. Those have some ability to lobby for changes. And then there's the wider public, which isn't that well informed and has sort of caught on to the notion that ChatGPT is now definitely a thing that is sometimes mildly useful.<p>And there are the AI companies, which are very well funded and have enormous lobbying power. They can move whole economies with their spending, so they get relatively little pushback from politicians. Political Washington and California run on obscene amounts of lobbying money, and the AI companies can provide a lot of it.
"Stop AI from taking our jobs" - This shouldn't be solved through regulation. It's on politicians to help people adapt to a new economic reality, not to artificially preserve bullshit jobs.<p>So politicians are supposed to create "non bullshit" jobs out of thin air?<p>The job you've done for decades is suddenly bullshit because some shit LLM is hallucinating nice sounding words?
They do create bullshit jobs in finance, though, by propping up the system when it's about to collapse from the consequences of its own actions.<p>Not that I believe they should let the financial system collapse without intervention, but the interventions during recent crises were done to save corporations that should have been extinguished, instead of the common people who bore the consequences.<p>Which I believe is what's lacking in the whole discussion. Politicians shouldn't try to maintain the labour status quo if/when AI changes the landscape, because that would be a distortion of reality. But there needs to be some off-ramp: direct help for the people who will suffer from the change, without going through the bullshit of helping companies in the hope that they eventually help people. As many on HN say, companies are not charities; if they can make an extra buck by fucking someone over, they will. The government is supposed to help people as a collective.
> "Stop the IP theft" - This feels like a cause pushed primarily by the 1%. Let's be realistic: 99% of people don't own patents and have little stake in strengthening IP protections.<p>Artists are not primarily in the 1% though, it's not only patents that are IP theft.