Given the lack of interest in the industry “self” regulating, and/or taking responsibility for, the content, what other option is there? It seems there’s little interest globally.<p>With my direct and indirect experiences of social media, I strongly support this.<p>That said; how does a young individual get updates to public transport outages that are only available via twitter/x, or read the menu of the local cafe that is only posted on Facebook?<p>I do worry about the implementation, especially if government owned. The government has, in the past, said one thing and executed another (DNS metadata collection for ISPs, for example). Whilst I have nothing to hide, and am happy to be entirely transparent with them, I can appreciate, respect, and understand the hesitation.<p>And, if government owned, how long until it’s “privatised”?<p>Will be interesting to see how this plays out.
There is zero willingness by Meta and others to even follow through on their own guidelines or government requirements. I think it's time a hard decision was made about the harms spread by these networks.<p>I am definitely not alone in submitting a report about a fake profile only for the system to nearly automatically deny it. Even when the real person being impersonated is literally sitting next to me asking me to help them submit the report. Even illegal drug dealers operate in the open on Meta properties with no recourse whatsoever.<p>The process of eliminating a huge swath of fake and harmful content could be implemented trivially; we have so many ways of muting or limiting the spread of information that is unvetted, of dubious origin, has outlier qualities and so on - yet nothing of the sort is employed by these networks, with obviously harmful consequences.
Thanks for your reply.<p>On a similar note: a family member was being exposed to intense and violent content over Facebook. There was no block, ignore, or report feature on the content they were exposed to. The only option we could find, after researching it, was a “show less of this content” setting buried deep in the Facebook web app (not even available in the standalone app).<p>Honestly, if the account owner has this little control over the content they are exposed to... ugh!
Your experience brings up an excellent point (and something that is coming up frequently in discussion of the proposed under-16 social media ban in Australia - a ban that is being supported by all sides of their government).<p>That point is that even a user's positive attempts to moderate their experience on the platform are futile and, in my experience, largely ignored. (I actually have the experience that it then shows me more of that content.)<p>Social media platforms are keenly aware that anger and fear drive significantly more engagement, and whistleblowers have detailed how Facebook prioritises this content, shovelling it to users specifically to drive usage (and with that, ad value and ad impressions).(1)(2)(3)(4)<p>There is a deep commercial incentive for social media networks to act against their own "community guidelines" and legislation. I applaud Australia's direction in recognising that these networks are not acting in good faith and introducing measures that address the proven harms.(5)<p>1. <a href="https://www.cbsnews.com/news/facebook-whistleblower-frances-haugen-60-minutes-polarizing-divisive-content/" rel="nofollow">https://www.cbsnews.com/news/facebook-whistleblower-frances-...</a><p>2. <a href="https://thehill.com/changing-america/enrichment/arts-culture/578724-5-points-for-anger-1-for-a-like-how-facebooks/" rel="nofollow">https://thehill.com/changing-america/enrichment/arts-culture...</a><p>3. <a href="https://time.com/6103645/facebook-whistleblower-frances-haugen/" rel="nofollow">https://time.com/6103645/facebook-whistleblower-frances-haug...</a><p>4. <a href="https://time.com/6097704/facebook-instagram-wall-street-journal/" rel="nofollow">https://time.com/6097704/facebook-instagram-wall-street-jour...</a><p>5. <a href="https://www.wsj.com/articles/the-facebook-files-11631713039" rel="nofollow">https://www.wsj.com/articles/the-facebook-files-11631713039</a>
> Given the lack of interest in the industry “self” regulating, and/or taking responsibility for, the content, what other option is there<p>This is surely an important point. People often make the argument of individual freedom. But at the same time, evidently we are excellent at using those freedoms to screw ourselves over. Globally, we've been speedrunning fucking up society in critical areas for decades now. Could the solution be less freedom? Is there some hidden hook whereby more freedom can solve everything?
> Globally, we've been speedrunning fucking up society in critical areas for decades now.<p>Social media has also been enormously beneficial in terms of crippling the propaganda power of centralized, commercial media. It would be very bad to simply return to the authority of editorial boards. What we actually need is to grapple with the social responsibility that comes with this power, which could take decades or even centuries of living with the internet to wrangle.<p>Especially now that we know how little of the world traditional newsrooms are even willing to cover, let alone fund coverage of.<p>Besides, the cat is out of the bag.
Not sure I agree with this entirely, especially the points on media. And I don’t think it’s been beneficial at all.<p>I’d argue the problem here is more one of quality, and can really only be solved with regulation. I want to read news, not a blog and opinion.<p>There should be clear and concise standards media outlets need to adhere to, but as per my suggestion in regards to social media, they will not self-regulate.<p>Additionally, there is no accountability or responsibility.<p>I would argue that the only benefit has been making this more apparent and obvious... and hopefully for the better.
> There should be clear and concise standards media outlets need to adhere to<p>Yeah, that'd be great, but it's never going to happen, and if it did happen you wouldn't like it. It's like people wanting unbiased journalism: that only exists in the minds of the people who think there are only two serious opinions to have on any topic.
It does happen - sometimes we get unbiased journalism. Just not from 'professional' journalists.<p>In response to an article posted in the global news thread on reddit, someone who lives there and actually has local knowledge and context can click 'reply' and explain what's actually going on and contradict/correct what the clickbaity, sensationalist mass media article contains.<p>For example, I live here, and the government is trying to pass said law, whose implementation they can't control. It's also seen by some as a surveillance state move under the guise of "won't someone think of the children". Under the proposed rules, theoretically people < 16 won't even be allowed to text one another.<p>A more in-depth/informative article from a local source, not Reuters:<p>[1] <a href="https://www.thenewdaily.com.au/news/politics/australian-politics/2024/11/07/social-media-ban-albanese" rel="nofollow">https://www.thenewdaily.com.au/news/politics/australian-poli...</a>
That link seems like the opposite of unbiased journalism, and also the title quotes someone as having called it a <i>‘deeply flawed plan’</i>, but upon closer inspection it seems to be quoting the opinion of the <i>author themself</i>!
At a fundamental level, that's what journalism is though - some individual noticing something and writing down what they perceived; it's always biased. To think otherwise is ludicrous.<p>The linked article is way more informative and in-depth compared to the link in the original HN post.
Social media could include everything, including forums and websites where people can leave a comment.<p>I know that some people here on HN want to go full Unabomber and live in the woods but I kind of like the internet.
> Besides, the cat is out of the bag.<p>That's what I was thinking too.<p>I am not sure what the value is in severely regulating half a dozen or so companies when workarounds are so easy to implement. Maybe as a stopgap solution while we figure out long-term solutions (which the government has a horrible track record on).<p>But for any long-term solution we would first have to define what social media even is, and in a way that's testable in court. Don't run away from the hard things, but wow, that's hard.
Your point is very valid. I know I would certainly have less awareness of global topics if social media didn't exist (and I don't even use it that much). Social media is definitely contributing to mental health issues too.
Do you think social media is net positive? Or do you think there is an alternative method by which the negative effect can be mitigated?
>Social media has also been enormously beneficial in terms of crippling the propaganda power of centralized, commercial media<p>This is just plain wrong. Social media has <i>moved</i> the propaganda power to foreign hostile nations on a golden platter. That is not an improvement.
> Social media has moved the propaganda power to foreign hostile nations on a golden platter.<p>I mean it's not like the power of domestic propaganda has <i>waned</i>, it's just in a war for the attention of the ignorant with other interests (most of which aren't foreign powers, by the way, but simply capital). Social media that enabled independent coverage and discussion is still there for the adults in the room and it would be a true loss for any society to pull the rug out from under the people who care to look <i>beyond</i> the for-profit newsroom (i.e., almost all media that's readily accessible, at least to Americans).
> Social media that enabled independent coverage and discussion is still there for the adults in the room<p>I'm not actually sure that's true. The only reliable sources on social media (in the sense of 'usually not horribly wrong') are actually traditional media companies like bbc, guardian. Perhaps I'm holding it wrong, but finding other trustworthy sources is actually really hard...
If there is one single reliable fact that I wish was repeated more often in these kinds of discussions, it is that newspapers and media companies will cater to the demographics of their core audience. Left, right, center, young, old, male, female, city, rural, and so on. Consumers will self-select towards publishers they agree with, and publishers will cater to whatever more reliably generates revenue and views.<p>Meta-studies often illustrate this well, but one can also do this on an individual level. Seek out reliable sources whose core audience is the opposite of one's own demographic and it becomes obvious how horribly wrong they seem, with misleading statements, omissions, weasel words and emotionally loaded meanings, mixed with straight falsehoods.<p>The only trustworthy sources are multiple sources, and even then we are likely to fail, since we are going to be searching for confirmation of what we already believe to be true. Social media does not usually help here, though Wikipedia seems to be a fairly good starting point (especially the talk pages are good for seeing where different reliable sources disagree the most).
This is especially untrue <i>today</i>, when reaching a wide, relevant audience requires significant resources, including financial -- either outright paying for sponsored places at the top of people's timelines, or paying for the expertise that delivers the right content to the right people in the right formulation.<p>There's obviously still room for exceptions (especially in small niches, where individual content creators can still make a dent) but this isn't 2006 anymore. The vast majority of the content that covers social, political or economic issues on social media platforms is paid for and pushed by directly interested parties (political parties, companies etc.) with ample funding and is often <i>part</i> of campaigns that span both social media and traditional media. The "indie" outlook is part of the packaging.<p>The terms of the "paid by" disclaimers are sufficiently generous that they're all but useless once you get past things like goodie bags for influencers or regulated campaign ads.
> Perhaps I'm holding it wrong, but finding other trustworthy sources is actually really hard...<p>Yes, it is hard, but there's no such thing as easy trust, and you should always be questioning whether such trust is actually earned or just comforting.<p>I don't think the "bbc" or "the guardian" are very trustworthy, either (at least by themselves)—both have obvious polemics and blind spots. They also only cover a very narrow, western-centric view of the world, leaving you with piss-poor understanding of world politics. I'm not saying you should <i>ignore</i> them but they're still propaganda.<p>Substack is invaluable; blogs are invaluable; twitter, as miserable as it is, is invaluable (for direct access to reporters sans newsrooms, if nothing else).
There is some truth to this crippling power and I don’t doubt that there are examples of it. But social media has been the vector for massive amounts of propaganda. Sorry to say I’d rather just have the commercialized editorial boards. At least that’s a single problem that can be reined in. Instead we have the worst of both worlds - commercialized media plus a million-headed hydra spewing falsehoods and nonsense.
> At least that’s a single problem that can be reined in.<p>I don't understand what you're referring to. How do you recommend reining in a newsroom? Especially one beholden to owners and advertisers with interests directly opposed to those of readers? How do you as a consumer opt out of the financial barriers to quality reporting? The only answer is the peer-to-peer nature of social media and the internet.<p>Perhaps what you're missing is that traditional media has zero incentive to highlight the positives of direct communication between disparate populations, creating farcically negative hysteria about the dangers of worldviews not beholden to giant interests (yes, including domestic and foreign state powers, but also individual entities with massive capital to throw around).
>Sorry to say I’d rather just have the commercialized editorial boards<p>Not me. Having solid experience living in some far-from-democracy countries, I can state with all certainty that social platforms opened at least a crack to alternative opinions for a lot of people. Yes, they are full of propaganda, but I think they still provide a more pluralistic picture compared to the world where old media ruled supreme. The real problem with those platforms is not "misinformation" (which will be everywhere anyway), but addictiveness, and the fact they incentivize aggressive tribalism.
History should have taught us that the answer is no, the solution is not less freedom.<p>It does make me want to watch the movie Equilibrium though.
>That said; how does a young individual get updates to public transport outages that are only available via twitter/x, or read the menu of the local cafe that is only posted on Facebook?<p>If such regulations come into effect, I think those businesses/institutions will adapt (eventually) to cater to communicating via additional channels.
> how does a young individual get updates to public transport outages that are only available via twitter/x, or read the menu of the local cafe that is only posted on Facebook?<p>Solutions will appear naturally to fill the gaps. This is not rocket science.
And indeed, solutions have already appeared: RSS
Agree. Like, local cafe, don't do that.
> That said; how does a young individual get updates to public transport outages that are only available via twitter/x, or read the menu of the local cafe that is only posted on Facebook?<p>Probably in a similar way to how they did prior to 2010
> I do worry about the implementation<p>The government does not own the implementation.<p>As mentioned in another comment, simply making this illegal would create a significant incentive for soc media companies to implement the solution.<p>The onus is on them to respect the law.<p>They've been slapped in the face in EU court enough now that they'd think about it seriously.
> That said; how does a young individual get updates to public transport outages that are only available via twitter/x, or read the menu of the local cafe that is only posted on Facebook?<p>I have never needed either of these so maybe those kids will manage too?<p>And especially in the public transport case that information should really be made available outside of private platforms in any case.
I need these multiple times every day, so maybe not. And this information simply isn't available elsewhere, regardless of what we think about it - in many cases it's not official but crowd sourced in a FB group, it's not a matter of simply publishing news articles on a different site.
I don't think this is a practical solution. And it does not solve the underlying issue, which is the attention economy.<p>Here's a better option that is easier to implement; even adults can benefit, and I think it solves some of the problems:<p>1. Have an easy option to turn off feeds, and enforce it for non-adults. This would apply not just to Meta and Twitter, but also to YouTube, LinkedIn, etc.<p>2. Disable like counts. The like counts are what hook people and give the dopamine kick. Add an easy way to hide them, and don't show them at all for under-18s.<p>3. Social and news sites should not be allowed to send notifications, period. Not on phones or browsers, at least not for those under 18.<p>Something along these lines would improve social media for everyone, not just kids. Parents' mental health affects kids just the same. So blocking it just for kids only goes so far.
Ah "taking responsibility for content", that classic weasel phrase for "Do what I specifically want or else." While disregarding the sheer difficulty of moderation at scale. They want a return to a "broadcast" information model and just refuse to admit it.
The law could force a "for kids" version like youtube, and businesses would automatically start there unless they were unsuitable.
"The spying industry isn't spying on people enough, so the government has no choice but to force them to spy harder"
As usual, the problem with this is that it assumes a way to perfectly identify somebody on the internet, which in turn means a way to perfectly identify, in real time, somebody permanently carrying a tracking device with GPS, microphone and camera.<p>It's crazy that all the things we considered the worst of dystopia in the 80's, thinking nobody would be stupid enough to do, and that those societies in SF books were only distant fictions, are things we are actively seeking now.<p>Things like "Find My" and AirTags are already beloved by millions; people use them to track loved ones and they swear by it. Even very intelligent, educated people.<p>There is such a cognitive dissonance between people swearing the last election meant a likely dictatorship and the same people setting up a tech rope around their necks in case a dictatorship does happen.<p>My now-dead Jewish grandfather met my grandmother during the occupation of France because she was making fake papers. He would be horrified if he knew what we are doing right now with our data.<p>My German ex was born in East Germany, 11 years before the fall of the Berlin Wall. She thinks people are mad to believe that tracking is not going to be abused.<p>What the hell is going on?
SocMed / AdTech cos. can already ID and target users with precision. Knowing someone's pregnant before their family does, per the canonical example from 2012.[1]<p>Impose an insanely steep tax (say, 10,000%) on all ads revenue tied to underage individuals.<p>Now it's possible to pursue the Al Capone prosecution: tax fraud.<p>SocMed cos. will avoid anyone underage like the plague.<p>________________________________<p>Notes:<p>1. "How Target Figured Out A Teen Girl Was Pregnant Before Her Father Did" <<a href="https://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/" rel="nofollow">https://www.forbes.com/sites/kashmirhill/2012/02/16/how-targ...</a>>, discussed on HN at the time: <<a href="https://news.ycombinator.com/item?id=3601354">https://news.ycombinator.com/item?id=3601354</a>> (17 Feb 2012).
Given my experience with ads today is that they frequently get my gender, language, nationality, and country of residence wrong (and in mutually incompatible ways), I suspect that the famous pregnancy story was dumb luck rather than a reflection of quality.<p>But yes, the Al Capone solution seems like it would work.
They don’t have to be right all the time, just enough to make ROI positive.
For their own needs, yes. For the purposes of satisfying the goals of government regulations, no.
I don't mind them having a positive ROI. I do mind them spying on me and my children, even if they draw incorrect conclusions about what ads interest us.
I realize this is the prevailing wisdom but it is short sighted and simplistic because it ignores opportunity cost.
More on mechanism (and why <i>specific</i> mechanisms should not be <i>legislated</i>) here:<p><<a href="https://news.ycombinator.com/item?id=42075146">https://news.ycombinator.com/item?id=42075146</a>>
<a href="https://medium.com/@colin.fraser/target-didnt-figure-out-a-teen-girl-was-pregnant-before-her-father-did-a6be13b973a5" rel="nofollow">https://medium.com/@colin.fraser/target-didnt-figure-out-a-t...</a>
Rather than a tax, I would just like to see data on how many ads kids are being shown in a day. And whether it's increasing year on year.
> SocMed / AdTech cos. can already ID and target users with precision.<p>Not reliably so, and not people who are deliberately trying to fool the system.<p>Looking at the ads FB are showing me, they are not well targeted at all. I can see an ad about AI in education that seems to be aimed at teachers (not a teacher), one from a charity aimed at people with spare rooms that could be used to accommodate human trafficking survivors (I do not have a spare room), a summer school for kids who might do a history-related subject at university (that one is well targeted), one advertising admin jobs at British intelligence agencies (I only just meet security requirements and I do not have relevant experience). I get shown multiple ads aimed at Muslims (I am not a Muslim). In the past I have been shown ads for Judaism GCSE courses (not Jewish either, and my kids did not do any religion GCSEs).
As a counter to this: advertisers not knowing how to use the targeting options available to them != the platform can't target well.<p>Saying that though, it does seem like as soon as I click one advert (either out of curiosity, or to waste obvious scam advertisers' budget), I'm suddenly inundated with similar adverts.<p>Worse, I've clicked an ad, bought a product and <i>still</i> seen adverts for the same product from the vendor after the fact. I'm pretty sure there are ways for vendors to preclude such wasteful ad spend.
> <i>As a counter to this: advertisers not knowing how to use the targeting options available to them != the platform can't target well.</i><p>Also, it's not like the platform is going to give advertisers access to its full capabilities. For one, they don't need to - they only need to sell the minimum it takes to keep the advertisers on board. Two, they wouldn't want some clever advertisers to extract juicy data indirectly and cut out the middleman.
> some clever advertisers to extract juicy data indirectly and cut out the middleman<p>What am I missing here? Don't you still need the platform to actually advertise on? How could I extract data and use that to advertise to someone?
Wut?<p>Better capabilities for customers, they reach more customers, more revenue = they may increase ad spending because they see it working
Yes, but you want to minimize the "better" part so you get the extra revenue without losing your competitive advantage.
it's also helpful to extract all the value at the current cost before incurring more costs.
And yet Netflix, Amazon, etc recommend movies and shows I’ve watched in the last 6 months. They literally know who I am and what I’ve watched and can’t/won’t build a basic recommendation engine that includes “don’t recommend things watched in the last X months”.<p>Maybe there’s a reason they want me to rewatch things, but I find it extremely annoying to have the first rows of recommendations be things I watched recently.
This might make sense given the priors, e.g. someone who hasn't watched Star Wars in the past 6 months might be less likely to watch and enjoy Star Wars than someone who's already watched Star Wars in the past 6 months.
That is all but certainly more a sign of enshittification than of technical inability to implement that function.
> As usual, the problem with this is that it assumes a way to perfectly identify somebody on the internet<p>It does not require this, and in fact the solution may be almost entirely outside of the technology realm.<p>Simply making it illegal would be enough to:<p>1. Deter a lot of people from enabling social media on the smartphone of their kids<p>2. Enable a lot of parents to tell their kids that what they are doing is illegal<p>3. Force BigTech to implement whatever solution fits best to control the age of the user before install. The onus to respect the law is on them.
Exactly, if cigarettes were currently available at any age, and it was proposed to be limited to 18+, you'd get people in here decrying the law, saying it's completely unworkable to prevent absolutely every under age person ever from consuming tobacco, and therefore we shouldn't limit it at all.<p>It's a very technical misunderstanding of how law works, or the effects law have on society. Legality isn't technicality. Something being physically possible doesn't "break" the law, and breaking a law doesn't render that law useless.
> Something being physically possible doesn't "break" the law, and breaking a law doesn't render that law useless.<p>No, but the ease of breaking the law does need to be weighed against the costs it imposes on everyone else. Carrying a driver's license is almost a non-cost, and is really only necessary til people hit 28 or so and cashiers largely stop checking.<p>Needing to verify age online could place very real costs on people. Enabling surveillance is an issue. The chilling effect of knowing that database leaks are going to tie your real identity to your online one is an issue (think the Ashley Madison leaks, but for your old angsty teenager Reddit account). There are a lot of small groups who don't want their membership revealed, like LGBT people in countries with laws against it, domestic abuse victims, support groups for people with trauma, etc, etc.<p>I don't think I've seen a system that doesn't involve those issues. I'm sure one is hypothetically possible, but that never seems to be a goal of these laws.<p>So the question is whether restricting social media use by age is worth those drawbacks. My personal values say it's not, but I do recognize that's a value call to some degree and views will differ.
Those are very valid but separate concerns.<p>You can make social media illegal for under 16s without requiring online age validation.<p>After all, a form of this already exists with the US COPPA [1] law, and the presence of that law hasn't forced age validation on everyone.<p>If the US can make it de-facto illegal for companies to let under 13s use social media, and has been the case for over 20 years, then it shouldn't be a sudden privacy-ending problem for Australia to make it illegal for under 16s to use social media.<p>[1] <a href="https://en.wikipedia.org/wiki/Children%27s_Online_Privacy_Protection_Act" rel="nofollow">https://en.wikipedia.org/wiki/Children%27s_Online_Privacy_Pr...</a>
Kids get around COPPA all the time. Everyone in my kid's middle school knows that you have to simply lie about your age when you sign up for an Internet service, because most of them won't let you in if you are stupid enough to say you are under 13, and the services that let you in are nerfed to the point of uselessness.<p>Furthermore, now I have to navigate the morality of <i>encouraging my own kid to lie</i>, just so she can use an online service that I approve of as a parent. Terrible law.
Apples to oranges.<p>Piracy would be a good comparison to this law. We all know how that turned out.
Or CP.<p>Also we have to consider that gov spying programs are developed everywhere in the world, and the fact CP has been used as an excuse to pass questionable bills again and again.<p>What's even crazier is that we have to even argue this is a problem.<p>And that intelligent people are arguing that this is all OK. People that read history books, saw PRISM and Snowden, heard about cops abusing their system already, etc.
Piracy is also apples to oranges.<p>There aren’t big tech companies running piracy services.<p>You could put them under the rule of law if they did.
The government in question is literally working through a plan for a national online proof-of-age thingy in line with this.<p>When it comes through, there's like 10 other interest groups who want to immediately use it for their thing.
> whatever solution fits best to control the age of the user before install. The onus to respect the law is on them.<p>If it becomes commonplace, the iPhone could vouch for you about your age, without disclosing your identity.
> If it becomes commonplace, the iPhone could vouch for you about your age, without disclosing your identity.<p>No, it really couldn't. If it can't disclose your identity, then there's no mechanism to revoke a compromised or cloned identifier, and no ability to demonstrate in court that you performed the necessary diligence correctly.<p>Age verification systems must, inherently, either produce a secure proof of age with no genuine anonymity, or a highly insecure, chocolate-teapot proof of age that's no better than a "I am over 18" tickbox that isn't going to protect websites in court from these clumsy laws.
> no better than a "I am over 18" tickbox<p>Surely even a flag locked behind the "parental controls" section of an iPhone is much better than a tickbox.
Eh, companies would probably be permitted to keep using the systems since "it's the industry standard, there's nothing more we could've done", and it would just become another instance of regulatory capture.<p>See, e.g., the GDPR, which as written has strong language against storing data, but as applied is riddled with 'reasonable' exceptions that have never been tested in court. It's like Schrödinger's regulation, it's both incredibly restrictive and trivial to comply with (unless you're Super Evil™) at the same time, somehow.
> Simply making it illegal<p>The problem is, what is "it"? Are applications like WhatsApp and iMessage social media? They seem to be significant sources of teenage angst but they are also incredibly useful tools. What about Hacker News? Is it social media?<p>How do you prove it?
> As usual, the problem with this is that it assumes a way to perfectly identify somebody on the internet, which in turn means a way to perfectly identify, in real time, somebody permanently carrying a tracking device with GPS, microphone and camera.<p>It's Australia, which often ranks very low for a Western country on human rights.<p><a href="https://www.amnesty.org/en/location/asia-and-the-pacific/south-east-asia-and-the-pacific/australia/report-australia/" rel="nofollow">https://www.amnesty.org/en/location/asia-and-the-pacific/sou...</a><p>Secret trials <a href="https://www.hrlc.org.au/news/2023/4/19/secret-trials-have-no-place-in-modern-australia-witness-js-sentencing-finally-revealed" rel="nofollow">https://www.hrlc.org.au/news/2023/4/19/secret-trials-have-no...</a><p>Secret laws <a href="https://www.theguardian.com/law/2023/mar/28/more-than-800-secrecy-laws-keeping-australian-government-information-from-the-public-paper-shows" rel="nofollow">https://www.theguardian.com/law/2023/mar/28/more-than-800-se...</a><p>Secret ministries <a href="https://www.theguardian.com/australia-news/2022/aug/19/scott-morrisons-secret-ministries-were-they-legal-and-what-happens-next" rel="nofollow">https://www.theguardian.com/australia-news/2022/aug/19/scott...</a><p>Secret backdoors <a href="https://www.schneier.com/blog/archives/2018/12/new_australian_.html" rel="nofollow">https://www.schneier.com/blog/archives/2018/12/new_australia...</a><p>Introducing a Digital ID system that may be an enabler for this <a href="https://www.oaic.gov.au/digital-id" rel="nofollow">https://www.oaic.gov.au/digital-id</a><p>While trying to introduce laws to weaken encryption, an ex-Australian prime minister famously said:<p>> "The laws of mathematics are very commendable, but the only law that applies in Australia is the law of Australia" - Malcolm Turnbull.<p>What's crazy is that the West looks at places such as China as dystopian, with the Great Firewall etc; it appears China was just ahead of the game.<p>Australia is probably the easiest place to introduce a system to block social media, an experimentation ground for Five Eyes. Easy to push things through, protesting isn't an Australian pastime, not that big of a country, not that small of a country.
Not that I agreed with the person for doing it, but people got arrested for protesting lockdowns over Facebook. I remember seeing a video of a self-proclaimed pregnant woman being handcuffed. I can't imagine why nobody protests in Australia, assuming this is the normal government response.
Australia has been a petri dish for the testing of totalitarian-authoritarian policies and populist dogma since its inception as a nation.<p>Never forget that the Western world's first, most successful racist genocide occurred in Australia, and the same racist reasoning that allowed it to maintain White Australia policies into the 80's is still being used to justify the slaughter of innocent human beings all over the world.<p>When the USA wants something dirty done, in yet another illegal war, Australia is ready with hat in hand, willful and subservient, to commit yet more crimes against humanity and get away with it.<p>Australia is where the apparatus designed, with intent, to massively violate the human rights of over a billion human beings currently operates, at insane scales, every second of the day. Australians are fine with it.<p>(Disclaimer: White Stolen Generation Australian.)
Isn't it Israel where they test the technology (on Palestinians)?
>(Disclaimer: White Stolen Generation Australian.)<p>For those who are reading this thread in disbelief at the magnitude of Australia's dark history, this is a compelling read:<p><a href="https://research-repository.griffith.edu.au/items/02ba5454-5dda-5a71-a2ae-fd25d885c418" rel="nofollow">https://research-repository.griffith.edu.au/items/02ba5454-5...</a>
> Never forget that the Western world's first, most successful racist genocide occurred in Australia<p>oh dear, have we forgotten about The Holocaust so soon?<p>Or are we just going to let slide misinformation on true historical events that don't need lying about to relay the injustice, and support justice?<p>Btw we are <i>not</i> going to accept that you mistyped. Again.
There is no sane discussion to be had around the "stolen generation" (I don't believe it was white, they were black) due to the mass misinformation that exists, the situation is muddy and complex where the facts simply don't meet reality.
>mass misinformation<p>The White Stolen Generation is a real thing. If you don't know about it, you're not qualified to dismiss it:<p><a href="https://www.creativespirits.info/aboriginalculture/politics/stolen-generations/a-guide-to-australias-stolen-generations#the-white-stolen-generations" rel="nofollow">https://www.creativespirits.info/aboriginalculture/politics/...</a><p>(EDIT: please look at the URL, which includes the anchor directly to the section on the "White Stolen Generation" - for some reason when following the URL through HN, the jump to the anchor for that section is not followed, giving the impression that there is no "White Stolen Generation" section on the given website, when .. indeed .. there is.)<p>Those who don't want to confront this painful aspect of Australia's terrible history tend to want to muddy the waters, but those of us who actually suffered under the forced human trafficking regime that was official Australian government policy see things a lot more clearly...
No, the tracking is irrelevant. The important point is to establish a shared cultural norm.<p>In the US, children under 13 are usually forbidden from having online accounts. Do some kids lie? For sure, but the sites will try to ban those accounts.<p>In practice, this legislation would be very positive because it would establish the expectation that young teens _are not on social media_. This only works if it's national legislation: No kid will feel left out because his/her friends are on SnapGramTok. They are all banned. They can be mad at the unfair government/adults together.
The government, and big tech companies, don’t need air tags or “find me” to track almost everyone, they have a mobile phone constantly chatting with local towers.<p>I don’t understand what is dystopian about knowing where I left my keys, and seeing how my wife’s commute home is doing without phoning her while she is driving, assuming we always both have mobile phones.
As you immerse yourself in technology, you hand over privacy, signals to act, and the framing of reality to an adversarial third party. It is an adversarial relationship, as corporations and governments are not interested in your well-being (if they appear to be, this is merely the 'candy' to draw you in) - they are interested in divesting you of your resources (time, money). If they are able to intermediate reality for us, the very means to experience a natural/unmanipulated mode of existence will no longer be available.
What does any of that have to do with having “find my” enabled for my wife, as opposed to just owning an iPhone or android phone, which at this point almost all adults have, and give all their location data to billion dollar multinational corporations?
I think the difference is between cheering on the loss of privacy and recognising the loss.<p>The loss of privacy was something that was totally foreseeable (by me, at least) but embraced by the general population. The general embrace has meant that we all operate within a panopticon-like system - this itself modifies behaviour. No alternative choices are available - as a collective we all have to 'follow the herd', with phones etc being essential to modern existence, despite the privacy reservations felt by many.<p>I'm taking the opportunity to express the idea that it is possible to find appeal in this or that tech feature <i>and</i> to lose sight of the broader ramifications. Each incremental change can be justified (no batteries makes my phone slimmer, 'Find My' is useful, it's good to know my blood pressure, etc) but to imagine that the rollout of these features is a natural unfolding of reality without a preordained end goal (technocratic governance) is naive.<p>Technocratic governance is not a goal that anyone would accept if it was proposed in a straightforward way - hence the duplicity, incrementalism, destruction of existing institutions, etc.
Relevant SMBC: <a href="https://www.smbc-comics.com/comic/2014-05-27" rel="nofollow">https://www.smbc-comics.com/comic/2014-05-27</a>
The dystopian bit is the corporation having this data, not you. And having this data for all users. Pretty much everyone has a mobile phone.
<i>"The dystopian bit is the corporation having this data."</i><p>I find data collection and tracking disconcerting not because I'm doing anything nefarious, unlawful or something I'm ashamed of—my life is pretty boring and uninteresting—but because it has the effect of reducing one's autonomy. If one knows one's data is monitored and one is being tracked then one acts differently. For some, they'll make minor changes in behavior, for others a lot. Even for those who've no idea others are tracking or collecting data about them, there's the broader issue of how society ought to be conducted. That's a very big issue I can't address here.<p>Of course, the authoritarians amongst us would argue that monitoring is a good thing as people won't be tempted to misbehave, commit crimes and such. I'd answer that by saying those who've nefarious intent will find ways of circumventing the surveillance, so the net effect on society is negative as the 'innocent' are the ones who are targeted without a good or just reason.<p>Right, pretty much everyone has a mobile phone. What has truly surprised me is how many people are prepared to trade data and being surveilled for convenience. I often wonder if they make such conscious decisions knowing the full implications of that surveillance. I suspect most do not.<p>Having a mobile phone, it's inevitable I will be tracked by multiple entities from government to Google. My response is governed by how much privacy and autonomy I am prepared to trade for convenience, and I reckon that's a large amount. Not having a Google account and killing or removing all Google services goes some considerable way, but I'm under no illusion that Big Tech is still tracking me. That said, the info collected wouldn't hold much value.
Isn't this the point they were trying to make? The corporations already have that data, it comes with the platform.
The bit where you can use some of it for your own purposes is not really the offending part.
No, I think you having the data is also creepy. I wouldn't want my wife to follow all my steps, and neither would she.
Our family (my daughters, wife) has had Find My paired on our phones since, well, 2010, when it was introduced. We like it.<p>I remember watching my sister and her husband calling each other frequently with, "How far are you away from home?" and just kind of scratched my head. (I think I have since talked them into enabling Find My.)<p>When my wife or I want to call a daughter we can see immediately if they are at work or at home... things like that are nice. There's also an indescribable sweetness to just looking longingly in on a daughter who is far away and wondering how her day is going. Empty nest blues? (Or maybe Miner at the Dial-A-View.)
<i>and just kind of scratched my head</i><p>Can you at least conceive that some people might find it deeply creepy that someone can follow their every step, and equally creepy to be able to 'spy' on another person 24/7?
Sure, but those of us who have this sort of tracking enabled for our loved ones aren't looking at it 24/7, and if you are referring to the corporations, well, they already were. At least now they have made it useful to everyday people.
I can imagine it. (And in this case I think my sister and her husband simply hadn't yet adapted to the new technology they had at their disposal.)
Yes, and they can choose not to do that. If they don't want companies doing it, they'd better not have a turned-on Android or iPhone device with them.<p>But I don't think it's a dystopian nightmare that I want it.
If your daughter or wife decided that she didn't want to be tracked by you any more, would you happily accept her wishes, or would you assume that she had "something to hide"?
Watching your wife's commute also makes me uneasy. Maybe she wants to get you a surprise birthday present or something. Anyway, I wouldn't want to track someone like that (including children) or be tracked like that.<p>EDIT: toned down the term I used.
"He already has pneumonia, adding lung cancer doesn't matter"
It seems like we don't need a technical solution for this and making one would be dystopian as you point out.<p>Why not just (a) make it illegal so the parents and tech companies see it that way and (b) put some fines on the responsible adult parties if violations are found?<p>Sure enforcement would be spotty and loose, but tickets for parents and fines for tech companies would go a long way to establish the cultural change without the need for "verification"
> Sure enforcement would be spotty and loose, but tickets for parents and fines for tech companies would go a long way to establish the cultural change without the need for "verification"<p>A company can't work like that - the tech company will need to do its utmost to avoid the fines, and so will implement verification that is unsafe, incredibly invasive and discriminatory, but crucially has an audit trail, and it's insane to suggest they would do anything else.
Yeah - exactly.<p>If companies are motivated and parents are motivated to prevent it and schools have the cover to also police it, then social media usage probably won't hit critical mass for kids and they won't feel FOMO to get it.<p>"it's illegal" is usually a decent enough hard stop to most kids for "why can't I have it"<p>Should just age-restrict smartphones altogether imo lol. Entire generations survived without mobile phones and it would remove entire issues around addictive mobile games, social media abuse, attention/focus issues
what prison time should a 15 year old get if they sign up to Instagram claiming to be 16?
Similar prison time to the one that a 17 year-old faces for buying alcohol by claiming to be 18.
If you put the parent in jail for 6 months and fine the company 1 million per user account, they for sure will start thinking real hard about fixing the problem.
How about a small fine and a short ban? Similar to driving underage.
1 year of community service.
not enough good/grand/noble characters in software engineering, management, leadership and so on.<p>nature-nurture x game-theory anxiety/thrill VS. consciousness & planetary/colony-wide awareness; and I'm not talking about some esoteric spirituality or whatever the fuck.<p>we are always in a natural balance, established by those who do and those who don't. all of us compartmentalise. and good/noble/grand people in law are rare and entirely absent in politics.<p>TV & the radio made a lot of destructive behaviours look cool (all that finboy shit) and so did feel-good literature like Siddharta.<p>and then there's the myth of "hard decisions" that can only be made by certain kinds of characters.<p>young ppl are easy to bend towards hierarchies that reward certain kinds of behaviour and convictions. makes em feel proud.<p>and in the end we are all just doin our job ^^
They surf on smartphones anyway, so it's enough to identify the device itself. Luckily the devices are already authenticated and identified, for your safety.
They figured out that if they get a load of hot girls, and cool goofy guys to entice people, and don't mention the spying part - it works a lot better than a bunch of government goons in badly fitting suits telling you to betray your friends and neighbours, or else.
Sounds like you are advocating for a much different law: one that bans (or at least makes prohibitively expensive) the kind of data mining/tracking that social media companies currently rely on for their business models.<p>Am I correct in this assumption?
It really is almost hilariously formulaic. I mean, it'd be hilarious if it didn't have such disastrous consequences.<p>The slide to dystopia which is described or implied by countless fictional books is, in fact, really easy to pull off. It's death by a thousand socially convenient compromises.<p>I remember when a significant portion of people thought carrying phones around was creepy, or that spending your life looking at screens was antisocial, and these things were actively proclaimed. Contrary to popularly repeated historical revisionist tropes, lots of people had to be cajoled and prodded and pressured into the "digital revolution". Surely others remember that?<p>And now we're at a stage now where if you <i>don't</i> have something like whatsapp or facebook or instagram or whatever, depending on the culture and the age group, there's a real chance that you'll be socially confronted, and even belittled in some circumstances.
> It's crazy that all the things we considered the worst of dystopia in the 80's, thinking nobody would be stupid enough to do, and that those societies in SF books were only distant fictions, are things we are actively seeking now<p>8-year-olds doomscrolling TikTok is a dystopia we are experiencing right now, pick your flavor
Goddamn, dystopia as a word has reached a new level of dilution.
I pick social media being regulated into becoming healthy for everyone so it doesn't have to be banned for teenagers.
I agree with everything you said. Leaving the morals and ideology aside for a moment though, and looking at the technical side, couldn't this, at least in theory, also be implemented in a way that's not so dystopian? Everyone has a SIM card, whether physical or digital, and also a cell plan. If you make carriers mark all underage customers and then have the SIM forward that boolean value to the phone and nothing more, could you in theory still implement something like this without any further breach of privacy? Of course, as with any system there are workarounds, but as a baseline this would work, no?
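For what it's worth, here is a tiny sketch of the data flow being proposed, just to show how little would need to cross each boundary. All the names (`SimProfile`, `is_minor`, `PhoneOS`) are invented for the illustration; this is not a description of any real carrier or mobile OS API.

```python
from dataclasses import dataclass

@dataclass
class SimProfile:
    # Hypothetical: set by the carrier when the plan is registered for a minor.
    is_minor: bool

@dataclass
class PhoneOS:
    sim: SimProfile

    def user_is_minor(self) -> bool:
        # The OS forwards only this one boolean; no name, birthdate or account
        # details ever leave the SIM.
        return self.sim.is_minor

def may_install_social_app(phone_os: PhoneOS) -> bool:
    # The app store / social app gates itself on the single flag.
    return not phone_os.user_is_minor()

phone = PhoneOS(sim=SimProfile(is_minor=True))
print(may_install_social_app(phone))  # False: blocked for a SIM flagged as a minor's
```

The obvious weak point is whether the carrier can actually know who ends up holding the SIM.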
Maybe things have changed since I was a kid, and I'm not up to date as my kids are too young to operate cell phones, but - does anyone actually buy a plan for an underage person? I'd think it's usually an adult parent/caregiver who buys another SIM card under their own name and just gives it to the kid; the carrier shouldn't have a way of knowing which phone/SIM card belongs to an underage person.
Functionally yes, the plan is in the parent's name. But at least where I'm from, you already have plans that are geared at kids, e.g. they have limits on how big their bill can be so they don't blow it up, certain services are turned off for them such as pay-per-minute calls, and so forth. Perhaps to put it a different way, the carriers might not have this info right now, but it would be straightforward to implement this limit this way. Any other approach that I can think of would be both more technical work and also a bigger breach of privacy.
<i>> couldn't this, at least in theory, also be implemented in a way that's not so dystopian?</i><p>It depends on the assumptions you're willing to make.<p>Imagine if Australia introduces a block that works really well, and preserves privacy, but you can bypass it by using a VPN, or by rooting your phone.<p>Would that be good enough, because only 1% of kids will figure out those bypasses? Or will the knowledge spread like wildfire until 90% of kids are using VPNs or rooting their phones?<p>If you assume such bypasses would spread like wildfire, Australia would also need to block all VPNs or make every phone impossible to root. Tough to do <i>that</i> in a way that isn't dystopian.
> worst of dystopia in the 80's<p>Huxley's "Brave New World" was published 1932
In Italy we have a digital ID (SPID) that's actually managed by third parties authorized by the government, and you can use it to log in to some services. I think it would work this way.
Australia has myGovID <a href="https://www.mygovid.gov.au/" rel="nofollow">https://www.mygovid.gov.au/</a><p>Although I think it's only for government sites. I believe it could probably be used to verify only age in a privacy-respecting fashion.<p>But let's be real. We all know this isn't about protecting children, because it never is.
Sweden has had a "BankID" system since 2003. It's run by the bank branch organisation. It works really well. Almost everybody has it.
And there is a super obvious solution to this: assuming children can't buy phones, the parents should set up the account for the children and enter the birthdate. Now the OS and the browser in the phone know if the user is an adult in that country, or whatever age level is needed.<p>If, say, an adult is incapable of setting up a phone/tablet, then a person at the store would help them with the initial setup but skip the login with Google/Apple etc.<p>No idea why this was not tried first, to then see if it fails.
My personal theory is that powerful people didn't want people to be digitally free. It might make them think about fighting for freedoms offline as well.
Have you read this book ? : <a href="https://www.fnac.com/a12617239/Sarah-Kaminsky-Adolfo-Kaminsky-une-vie-de-faussaire" rel="nofollow">https://www.fnac.com/a12617239/Sarah-Kaminsky-Adolfo-Kaminsk...</a>
"What the hell is going on?"<p>"...way to perfectly identify somebody on the internet..."
> As usual, the problem with this is that it assumes a way to perfectly identify somebody on the internet<p>Nope, it does not require this.<p>What you can do is to construct a zero-knowledge proof of the age of the smartphone user. You can do this based on the certificate that is provisioned in modern electronic passports. The proof construction can choose to ignore expiry date, so the government can offer to issue expired passports with blank photos for the instances of people who should not have functioning passports.<p>Then you need the social media sites to agree on a common auth mechanism that uses the zero-knowledge proof, and if the same proof gets used on multiple devices, you log out the previously used device.<p>So even if a parent lets their kid use the parent's passport to generate the proof on the kids phone, every time the kid uses this proof, the parent gets logged out of Facebook and all other social media apps.<p>Then no parents are going to let the kids use their ID, because of the negative impact to themselves. And you have no more means of tracking than already exists today.<p>This also offers interesting possibilities for a startup! In the EU today, there are essentially zero legal ways for a 12-year-old to have a group chat with their friends, apart from RCS and iMessage, which then divides the friend group into two halves. Imagine then a chat app aimed specifically at group conversations for teens, where every user has a verified age. This type of app is something parents will pay good money for.
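To make the single-active-device idea concrete, here is a rough server-side sketch in Python. Everything in it is hypothetical: `verify_age_proof` stands in for real zero-knowledge verification against the passport issuer's certificate, and the `nullifier` is assumed to be a stable tag derived from the proof that lets the service recognise "same passport" without learning whose passport it is.

```python
from dataclasses import dataclass, field

@dataclass
class AgeProof:
    nullifier: str  # hypothetical stable tag derived from the passport credential; reveals no identity
    over_16: bool   # the only claim the proof discloses

def verify_age_proof(proof: AgeProof) -> bool:
    # Placeholder: a real system would run zero-knowledge verification against
    # the passport issuer's signing certificate here.
    return proof.over_16

@dataclass
class SessionStore:
    # Maps each proof's nullifier to the single device currently allowed to use it.
    active_device: dict = field(default_factory=dict)

    def login(self, proof: AgeProof, device_id: str) -> bool:
        if not verify_age_proof(proof):
            return False  # invalid or under-age proof: no session
        previous = self.active_device.get(proof.nullifier)
        if previous is not None and previous != device_id:
            self.logout(previous)  # same passport reused elsewhere: kick the old device
        self.active_device[proof.nullifier] = device_id
        return True

    def logout(self, device_id: str) -> None:
        print(f"logging out device {device_id}")  # stand-in for real session invalidation

# A parent who lends their passport-derived proof to a kid loses their own session:
store = SessionStore()
proof = AgeProof(nullifier="abc123", over_16=True)
store.login(proof, device_id="parent-phone")
store.login(proof, device_id="kid-phone")  # prints: logging out device parent-phone
```

The privacy claims live entirely inside whatever proof system sits behind `verify_age_proof`; the sketch only shows the bookkeeping that makes lending your ID to a kid annoying enough that parents won't do it.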
> Then you need the social media sites to agree on a common auth mechanism that uses the zero-knowledge proof, and if the same proof gets used on multiple devices, you log out the previously used device.<p>> So even if a parent lets their kid use the parent's passport to generate the proof on the kids phone, every time the kid uses this proof, the parent gets logged out of Facebook and all other social media apps.<p>Doesn't this automatically give the "common auth mechanism" perfect knowledge of all the parent's social media accounts, whether under a real name or a pseudonym? That sounds like an additional means of tracking.<p>Also, what if the parent legitimately has multiple devices that they use for social media (e.g., a phone and a laptop)? You might say, "log them out if it's used <i>simultaneously</i> from multiple devices", but then you're tracking all social-media usage everywhere in realtime.
I hope European 12-year-olds are still wise enough to know the age-old tradition of claiming to be 22 when creating online accounts.
A knife can be stuff of nightmares or a very useful tool (even with constant threat of abuse and accidental harm).<p>Location tracking is just another knife.
Yes, but:<p>- You don't have full control over the knife. In fact it's not about your use of the knife.<p>- The people in control of the knife have a poor track record at caring about you, and belong to a category that history has proven we should not trust.<p>- Every time you use the knife, you sign a blank contract giving full authorization to future non-specified knife usage to people you don't know nor do you know their motive. And you trust none of those people, for an indefinite length of time, will abuse it.<p>- Cumulative use of the knife can accumulate stabs that may be dealt to you and all your loved ones one day all at once.<p>- Use of the knife can be automated and scaled at the level of all countries.<p>- Most people are not knowledgeable about the knife, don't know much about it, and it's invisible to them.
> You don't have full control over the knife. In fact it's not about your use of the knife.<p>There are many knives in your relative proximity all the time. Some of them are used for your benefit, others are used for others' benefit, and every one has the potential to be abused against you by an evil actor.<p>> The people in control of the knife have a poor track record at caring about you, and belong to a category that history has proven we should not trust.<p>Scrutiny should be proportional to the severity of harm times the frequency of harm.<p>> Every time you use the knife, you sign a blank contract giving full authorization to future non-specified knife usage to people you don't know nor do you know their motive. And you trust none of those people, for an indefinite length of time, will abuse it.<p>Every time you enter a restaurant or even leave your house, you are giving full authorization to people around you who possess knives to use those knives for non-specified purposes. You can only hope they are reasonable and won't cause you harm.<p>> Cumulative use of the knife can accumulate stabs that may be dealt to you and all your loved ones one day all at once<p>Knives don't need to accumulate anything. One misapplication can literally end your life.<p>> Use of the knife can be automated and scaled at the level of all countries.<p>There are many machines that are basically a bunch of fast-moving knives. It's on you to not be where they are. Their operators are bound to take precautions proportional to harm times the frequency of harm.<p>> Most people are not knowledgeable about the knife, don't know much about it, and it's invisible to them<p>You usually can't really tell how many knives are around you, and most are hidden at most times.
Exactly. You don't allow people to carry knives in public because it does more harm than good.
Some places don't. Some do. In Poland, news of a ban on knife carrying was an April Fools' joke. In the UK it's a serious law with serious punishments. In the UK, people who would carry a knife with malicious intent have just moved to carrying caustic chemicals. It's often not a specific technology that's the problem, but the culture and incentives for its misuse.<p>Although reasonable restrictions on all technologies aren't inherently bad.<p>Restrictions imposed pre-emptively, just because nightmare scenarios are easy to imagine and captivating, are not great.
> just another knife<p>Not in the hands of a government that has a monopoly on violence. Most governments in the world don't have your best interests in mind, and being tracked 24/7 is a bad deal to get.
God was a dream of good government.
Isn't a smartphone exactly designed to track and surveil individuals on the internet? I only see legal limitations here, and the costs.
Now that you mention it, the would-be dictator that was just elected in the US is allied with both the stereotypical hand-wringing think-about-the-children people (religious groups) as well as with Elon Musk, owner of a major social media platform and future "efficiency czar". Talk about cognitive dissonance...
> <i>It's crazy that all the things we considered the worst of dystopia in the 80's, thinking nobody would be stupid enough to do, and that those societies in SF books were only distant fictions, are things we are actively seeking now.</i><p>> <i>What the hell is going on?</i><p>Because they <i>weren't all that dystopian</i>? Maybe people had the good sense to see the amazing aspects of those future technologies <i>even when</i> the authors tried to paint them darkly? Or maybe we rightly all realized - even if just at a subconscious, emotional level - that those fictional dystopias are thin, simplistic narrative devices, and "no one would be stupid enough to" actually implement them?<p>What we didn't predict was adtech. For all the anti-corporate fiction that was part of, or adjacent to, science fiction of that era, Western culture is still obsessed with self-made people, glorifying entrepreneurship, making money. Well, it turns out there's hardly a difference between small businesses and large corporations. The incentives are the same regardless of size; sole proprietorships and multinationals aren't some sworn enemies, they're just <i>different sizes of the same thing</i>.<p>What we didn't realize - and many still don't - is that, while no one may be stupid enough to implement the techno-dystopia outright, many will happily do <i>whatever it takes</i>, no matter how unscrupulous and underhanded, to secure their income. We didn't realize that glorifying "business sharks" means creating an entrepreneurship culture where customers are resources to be exploited, not human beings. Iterate on it enough, and we get to techno-dystopia <i>indirectly</i>, with no stupid steps made on the way - just plenty of exploitative ones.<p>Perhaps the memory of the first 3/4 of the 20th century has us putting too much focus on governments, and leaves us blind to private interests. A particular pet peeve of mine, for example:<p>> <i>My German ex was born in East Germany, 11 years before the fall of the Berlin Wall. She thinks people are mad to believe that tracking is not going to be abused.</i><p>Yes, and at least some people who think alike also like to bring up Nazis pulling census records to make their purges more efficient. But frankly, I don't think this is a good argument against censuses, or contact tracing. It's not the census that put minorities in concentration camps, it's the Nazis. They wouldn't have stopped if that data didn't exist; they'd have found some other proxy or just done it systematically, like they did elsewhere.<p>A better argument is to perhaps not collect data you don't need. A census without religious affiliation would be less useful to Nazis and the like, and it's questionable how useful having this information was for the purposes for which it was collected. But my ultimate point is: when your government turns evil, <i>you're screwed anyway</i>. We need to try and prevent it from going bad in the first place. That may indeed involve fighting against <i>badly justified surveillance overreach</i>, but trying to take away tools that an okay-ish government uses for good, only because they'll be used for bad things when the government turns evil, is just self-destructive behavior.<p>I mean, you could argue that teaching a person how to build strength and keep a healthy body, how to read, write, plan, think, negotiate, and convince, is all bad because <i>what if that person becomes a criminal or a dictator</i>?
Yes, it would be much better if murderers and tyrants didn't know how to use a knife or a gun, or how to talk people into helping them. But we consider that risk to be worth it because of all the good it brings to <i>everyone else</i>, continuously.<p>And yeah, we're all so focused on the idea of governments going full-Nazi again that we've ignored the gradual overreach of private interests, which at this point not only actively screws with our everyday lives all the time (hello advertising industry), but also undermines our governments too. Even worse, all the "dual-use" tech and data private interests have, which okay-ish governments <i>would love to access</i> but have the good sense not to, sits there gift-wrapped for any evil government willing to reach for it. We ended up choosing the worst option of all: all the downsides of tech being abused by private interests now and evil governments in the future, with none of the upsides of okay governments using it for good today.<p>So perhaps we should concentrate less on things like AirTags being potentially deadly in the hands of the next Stalin, and focus more on fighting private and public attempts at abusing them early, while maximizing the benefits society can get from such technologies.<p>Or, in short, let's pay less attention to how some tech could be abused in principle, and more attention to <i>people</i> who are trying to abuse others.
Typical hacker misconception regarding the law. You don't need to implement perfect tracking to enforce a law. Make it illegal and then let BigTech use their AI algorithms to detect and block minor accounts. Parents will also play a major part.
Funny how so many middle-aged tech workers, who were almost certainly using some primitive social media themselves as teenagers, are now in support of a full ban.<p>I don't like the outcomes we see with modern social media, but this feels like we're punishing the victim instead of the perpetrator.
Social media that tech workers now in their 30s and 40s were exposed to was never any more intense than a bulletin board or a chat room.<p>Social media now is a very different beast. It's designed to be addictive, and it generates engagement through polarization.<p>Is banning social media for children a punishment for the child? Some parents would argue the opposite: that it's a good thing, one the child isn't developed enough to realise yet.
> this feels like we're punishing the victim instead of the perpetrator.<p>Wouldn't placing the limits on the social media companies potentially prevent them from operating at a profit? I don't think you're wrong, but I question whether you're not removing social media for children and teenagers regardless; the difference is whether there will be social media for them to use later in life.<p>Banning social media for those under 16 (or 18), or killing off social media in general due to restrictions on how it operates, doesn't really matter to me, even if the latter seems like a bigger win for society in general.<p>Social media was an interesting and at times fun experiment, but it might be time to accept that it's not working out as we had hoped.
One big difference is that LiveJournal, Inc. weren't a parasitic company that sought to foster addiction and analyze the hell out of their users with algorithms. They had every chance to, because the data about interests and friends was just as open as it is on other services - perhaps even more so - but they didn't.<p>I can't speak for the other big companies of the time, like MySpace and Xanga, because I never used them.
Perhaps we support the ban because we saw first-hand the damage it can cause to others and ourselves.<p>I would have supported a ban when I was in high school. I think a lot of teenagers would have.<p>Also, I don't understand how a ban is punishing the victim any more than banning drugs from high schools is punishing teenagers.
Same happened with the Drug War. The generation that spent their entire teenage years in the 60s in a haze of MJ smoke became the fiercest prohibition advocates during the 80s and 90s.
Actually this could be a really good analogy on many levels.<p>For example: comparing the THC levels of the 60s to the THC levels today, and the potency of 90s social media to social media today, seems appropriate. I mean, one can only do so much social media over a slow modem in a home that has a single shared phone line.<p><a href="https://www.newscientist.com/article/2396976-is-cannabis-today-really-much-more-potent-than-50-years-ago/" rel="nofollow">https://www.newscientist.com/article/2396976-is-cannabis-tod...</a>
> middle-aged tech workers, who were almost certainly using some primitive social media themselves as teenagers, are now in support of a full ban<p>How many people do that because they look back on what they posted on MySpace or Bebo or whatever and think "thank god that's disappeared from the internet"?
Using social media as a teenager is why I think teenagers shouldn't be allowed to now. It was not a healthy experience.
Social media for us was chronological timelines of content our friends produced.<p>Not... <i>waves hands</i> whatever the f*%k this is.
Profit, it's profit: almost free, self-perpetuating, self-promoting profit, built on addiction and the exploitation of users.
So Mastodon should be fine for the kids to use?
Upvoted for mental imagery.
40-50 year olds did not grow up with social media. Myspace didn’t even exist when they were in high school.<p>People in their early to mid 30s had Facebook come into popularity during high school. I’d wager the majority of that age group would happily ban social media for children having experienced youth both with and without modern social media.
The Facebook we had back then was completely different than what it is today. 100% of the content on the page was content directly created or shared by one of your friends. No suggested posts, no ads, no news, no "influencers", no politicians, no algorithm, no tracking, etc. Being invite-only at first also meant everyone you saw on the site was a friend or a friend of a friend: no parents, no teachers, no cops, no media organizations, no companies, etc.<p>You would log in and see what your friends were up to, in chronological order.
I wonder how you can implement such a law without forcing people to identify themselves online. Will they enforce a digital ID that you need to use to access the web or social media?
No comment on the implementation, but I wonder if there's some value in just allowing parents to be able to point to this and say "No, little Fred, you're not allowed to have an Instagram account until you're 16. It's the actual rule."
Yep, the "everyone else has BLAH" argument is a strong one. If we collectively take action through government to set a standard it is MUCH easier to shut down those self-fulfilling claims.
I'm listening to Australian radio right now and a group of mothers just made this exact point.
Seems to me that the better solution is to give parents the ability to observe their kids' activities, and for <16 accounts to be able to operate only when tied to an adult account, which can observe activity.<p>Of course many will say this can be abused, but <i>all technology can be abused</i>, and the reason we're in this mess in the first place is because OS designers haven't figured out that the relationship between parent and child is an important one which should be strengthened, not made weaker.
Ah yes, teenagers, people with famously little time on their hands and a penchant for following rules.
> allowing parents<p>What? Why would parents need permission from their government to forbid their kid from having an Instagram account? They're parents, so they can engage in parenting.
Not allowing as in "giving them permission", but "allowing" as in enabling them to do so.<p>Right now, if a parent says "You can't have Instagram. Because I say so." the kid's answer will be "But I will be a loser noob if I can't. All my friends are on it. Jenny has 5k followers!"<p>Vs after the ban: "You can't have Instagram. This is the law." "But mom! Some of my friends are on it. Jenny has 1k followers!" "Is that so? I will ask Jenny's mom if she knows about that."<p>It is not going to stop absolutely everything. (Just as prohibiting underage drinking doesn't stop teens from drinking at all.) But it will put a serious damper on it and fracture the social networks into smaller, more underground ones.
Allowing as in “enabling”, obviously.
Because not everyone has a computer science degree.<p>They probably want to allow their kids to use a computer, so it would be very easy for the kid to go to Instagram when they aren't looking.
I’m guessing you don’t have kids
The government is currently tendering for providers of different systems. See here [1] and here [2]:<p><i>Tender documents released on Monday show the technical trial is slated to begin “on or around 28 October”, with the provider also expected to assess the “effectiveness, maturity, and readiness” of technologies in Australia.</i><p><i>Biometric age estimation, email verification processes, account confirmation processes, device or operating-level interventions are among the technologies that will be assessed for social media (13-16 years age band).</i><p><i>In the context of age-restricted online content (18 years or over), the Communication department has asked that double-blind tokenised attribution exchange models, as per the age verification roadmap, and hard identifiers such as credit cards be considered.</i><p>[1] <a href="https://www.innovationaus.com/govt-readies-age-verification-tech-trial-for-social-media-ban/" rel="nofollow">https://www.innovationaus.com/govt-readies-age-verification-...</a><p>[2] <a href="https://www.biometricupdate.com/202409/australia-launches-tender-to-trial-biometric-age-estimation-and-alternatives" rel="nofollow">https://www.biometricupdate.com/202409/australia-launches-te...</a>
The source for "double-blind tokenised attribution exchange models" is this report from July 2024, from the Australian eSafety Commissioner: <a href="https://www.esafety.gov.au/sites/default/files/2024-07/Age-Assurance-Issues-Paper-July2024_0.pdf" rel="nofollow">https://www.esafety.gov.au/sites/default/files/2024-07/Age-A...</a><p>They note that existing age verification setups largely either rely on providing ID, or on a combination of manual and automated behavior profiling (face recognition, text classification, reports from other users), both of which have obvious privacy and/or accuracy issues. The "double-blind tokens" point to a summary by LINC explaining how they _could_ be implemented with zero-knowledge proofs, but I could not find an article or a practical implementation (could just be a mistake on my part, admittedly)<p>At _best_ you end up with a solution in the vein of Privacy Pass - <a href="https://petsymposium.org/popets/2018/popets-2018-0026.pdf" rel="nofollow">https://petsymposium.org/popets/2018/popets-2018-0026.pdf</a> - but that requires a browser extension, a functioning digital ID solution you can build on top of, and buy-in from the websites. Personally, I also suspect the strongest sign a company is going to screw up the cryptographic side of it is if they agree to implement it...
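For readers wondering what a "double-blind token" could even look like: below is a toy sketch of the general blind-token idea using a textbook RSA blind signature with tiny, insecure parameters. It is not Privacy Pass (which uses a VOPRF), and not any scheme Australia has specified; it only illustrates how an issuer can sign an age token it never sees in the clear, while a website can later verify the signature without contacting the issuer or learning an identity.<p><pre><code>import secrets
from math import gcd

# Toy issuer key (insecure sizes, illustration only).
p, q, e = 101, 103, 7
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))   # issuer's private exponent

def blind(token):
    """User blinds the token before sending it to the age verifier."""
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            break
    return (token * pow(r, e, n)) % n, r

def issuer_sign(blinded):
    """Issuer signs without learning the underlying token value."""
    return pow(blinded, d, n)

def unblind(blind_sig, r):
    return (blind_sig * pow(r, -1, n)) % n

def site_verify(token, sig):
    """Site checks 'this token was age-approved' using only the public key (n, e)."""
    return pow(sig, e, n) == token % n

token = secrets.randbelow(n)             # stand-in for a hashed, single-use token
blinded, r = blind(token)
sig = unblind(issuer_sign(blinded), r)
print(site_verify(token, sig))           # -> True
</code></pre>
Real deployments need proper key sizes, hashing and padding, one-time tokens to prevent replay, and a revocation story, which is where the practical difficulty (and the risk of botched implementations) comes in.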
Sales of stick on mustaches will skyrocket
It's a bit wild that instead of parents just being responsible and teaching their children properly, we'll resort to neutering privacy and freedom on the Internet.
Making it illegal could make it taboo, kids are less likely to talk about it in fear of "getting caught" and less talking means less usage.<p>This is not like porn which is a solitary activity: on social media you have to be social and let everyone know… at least for traditional actually-social media, not content-consuming apps like TikTok.<p>It's similar to alcohol usage: you can't stop it completely, but also you don't have 50% of kids bringing it to school.
> Making it illegal could make it taboo, kids are less likely to talk about it in fear of "getting caught" and less talking means less usage<p>So like piracy?<p>> This is not like porn which is a solitary activity: on social media you have to be social and let everyone know… at least for traditional actually-social media, not content-consuming apps like TikTok<p>You don't? You can stay perfectly anonymous.
A drop-down list of birth dates/years "works" for most age-restricted sites - I guess the logic is that if a user is lying about their age, it's not the site's problem.<p>The article states that sites must demonstrate they are taking reasonable measures to enforce this though - a lot will come down to how courts interpret that. If they go to the extremes of the KYC laws in Australia, I imagine a significant fraction of adults will not want to verify their age.
> I guess the logic is that if a user is lying about their age, it's not the sites problem.<p>If the law is to have any teeth at all, it should be the problem of the service provider.<p>Say for example that a banned feature for minors is having media feeds based on past watching behavior. Lacking a reliable age verification it's simple for social media companies to remove the feature entirely for all users, if it's unreasonable or impossible for them to implement age verification.
> extremes of the KYC laws in Australia<p>Can you provide more details about this statement? I never heard anything about it in HN discussions.
I’m not sure how unusual it is internationally but KYC laws in Australia will generally require 100 points of identification, usually satisfied by showing your passport and drivers license. Other options include recent utility bills, your birth certificate, medicare card etc.<p>The system wasn’t really designed for the internet era and I think a lot of people would not be happy about handing all the personal info over to TikTok or Facebook
As much as I am grateful for most of GDPR, it has shown that leaving the implementation of anything to websites is a recipe for disaster.<p>It's gonna be cookie banners 2.0.<p>I bet a lot will just ask for a credit card number, like in those old scam fake-porn websites from the late 90s/early 2000s.
There’s a near-zero chance of getting caught driving without a license. Despite not having a drivers license checking mechanism, people generally don’t drive without a license because of the mere fact that it’s illegal.<p>Societal signaling is pretty powerful.
RE ".....how can you implement such a law..."<p>Request that the social media platforms implement the restriction.
The large social media platforms have billions of dollars in cash, so if they "really want to implement it", it should not be a problem.<p>However, I expect social media companies to drag out every reason why they cannot implement it, since it does not benefit them and would reduce their user base.
<i>> Request the social media platform to implement the restriction. The large social media platform have billions $ cash , so if that "really want to implement it" it should not be a problem.</i><p>I'm sure that Facebook, Google and TikTok will be <i>delighted</i> to make it mandatory that Australians send in photos of their face, passport and driving license.<p>But is it good for <i>Australia</i> to have their citizens hand such mountains of PII to unaccountable foreign megacorporations?
You don't!<p>That's exactly what they're aspiring to here, following on from a well-established pedigree of Australian lawmakers and their dysfunctional relationship with the Internet.
I don't think lawmakers should necessarily describe HOW things are done. Here it's enough to say "unless you can be 100% sure your user is above age X, you can't provide them service Y or feature Z".<p>Identification might not even be the desired outcome; the better outcome could be to have feature Z stripped for <i>all</i> users (for example, video feeds based on past watching behavior).
It’s happening on porn sites in some states in the US right now. When you visit the site, they ask you to validate with your ID.
You don't necessarily need to actually attempt to globally enforce it. It's like speeding, right? Everybody knows the law, and a lot of people choose to break it. We can't check everybody's speed all the time, so instead we selectively enforce.<p>The real change, though, comes from parents' perceptions. Right now there are age limits of 14 years old on most social media platforms, however most parents just see this as a ToS thing, and nobody cares about actually violating it. Once it becomes law, the parents are suddenly responsible (and liable) for ensuring their children are not breaking the law by accessing social media. It's not going to stop everybody, but it'll certainly move the needle on a lot of people who are currently apathetic to the ToS of social media platforms.
The government is being deliberately non-prescriptive about that, as they are about what qualifies as 'social media' (statement of fact - no comment on the approach itself). Ideally the legislation is accompanied by a government digital service that allows 3rd parties to verify age _without_ divulging full identity, but I don't see that side of things being discussed anywhere down here :(
They seem pretty clear [1] about what social media is:<p>Social networks, public media sharing networks, discussion forums, consumer review networks.<p>[1] <a href="https://www.esafety.gov.au/sites/default/files/2023-12/Phase-1-Industry-Codes-%28Class-1A-and-Class-1B-Material%29-Regulatory-Guidance.pdf?v=1730953002185" rel="nofollow">https://www.esafety.gov.au/sites/default/files/2023-12/Phase...</a>
They haven't got the competence to implement it even if they wanted to.
Well, funny you should mention that - the AU government ID system (used to access govt services like medicare and tax), has very recently been rebranded from MyGovID to MyID. Most states have already got digital drivers' licences.
Same as alcohol. If you supply your kids with alcohol, or even have it at home and they get drunk without your knowledge, you'll be in trouble.
Law should <i>not</i> be excessively prescriptive, <i>especially</i> in the case of rapidly-evolving technologies (and business sectors), for all the obvious reasons.<p>What's far more useful is to propose effective <i>incentives</i> and <i>disincentives</i>, and let the participants work this out for themselves. Some useful principles and examples come to mind:<p>- Business is profit-oriented. Attack the basis of profits, in a readily-identifiable and enforceable way, and activity which pursues those markets will tend to dry up.<p>- Business is profoundly risk-averse. Raise the risks of an activity, or remove liability protections and limitations (e.g., Section 230 of the CDA in the US), and incentives to participate in that activity will be greatly reduced. Piercing the corporate and third-party veils of service providers (aiding and abetting a proscribed commerce) and advertisers (profiting by the same) would be particularly useful here. Lifting any limitations on liability for harms which might occur (bullying, induced suicides, addiction, or others) would be similarly crippling.<p>As to how age might be ascertained:<p>- Self-reporting. Not terribly reliable, but a decent first cut.<p>- Profiling. There are <i>exceedingly</i> strong indicia of age, based on a particular account's social graph, interests, online activity, location data (is the profile spending ~6h daily at an elementary school, and not lunching in the teachers' lounge?), etc. (A rough sketch of what such profiling could look like follows below.) One strong distinction is between <i>legislation</i> and <i>regulation</i>, where the latter is imposed (usually with a rulemaking process) through the <i>executive</i> branch (SCOTUS's Loper Bright <i>v.</i> Raimondo being a phenomenally stupid rejection of that principle). Such regulation could then, on a more flexible basis, identify specific technical means to be imposed, reviewed, and updated on a regular schedule.<p>- Access providers. Most people now access the Internet through either fixed-location (home, work, institutional) providers, <i>or their own mobile access provider</i>. Such accounts could well carry age (and other attestation) flags which online service providers could be obliged to respect as regards regulation.<p>Jumping in before a few obvious objections: no, these mechanisms are not <i>perfect</i>, but I'll assert they can be <i>practically effective</i>; and yes, there are risks of authoritarian regimes abusing such measures, but then, those regimes are already abusing <i>present</i> mechanisms. I'd include extensive AdTech-based surveillance in that, which is itself ripe for abuse and has already demonstrated much of this.<p>(That said, I'd welcome rational "what could possibly go wrong" discussion.)
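Purely as an illustration of the profiling point above (not any platform's actual system): a toy scoring sketch with made-up signal names, weights, and thresholds.<p><pre><code>from dataclasses import dataclass

@dataclass
class AccountSignals:
    median_friend_age: float       # estimated from the social graph
    school_hours_fraction: float   # share of weekday daytime spent at a school
    self_reported_age: int

def likely_under_16(s):
    score = 0.0
    if s.median_friend_age < 18:
        score += 0.5
    if s.school_hours_fraction > 0.6:
        score += 0.4
    if s.self_reported_age < 16:
        score += 0.3
    # Flag for review or feature restriction rather than hard-blocking outright.
    return score >= 0.6

print(likely_under_16(AccountSignals(15.2, 0.8, 22)))  # -> True despite claiming 22
</code></pre>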
Remember the Silicon Valley episode where they would have been fined $21 billion for not verifying the age of PiperChat users? Same way. All companies are one slippery slope away from being fined by Missouri for not protecting children enough. Or Australia.
I fully support this. In fact, make it 18. I see it like a new type of drug. Future generations will be horrified that we permitted children to connect their brains to attention-optimization algorithms running on supercomputers.<p>Children can not consent. They can't sign contracts. They don't understand the ramifications of what algorithmically delivered content does to you.
I have one primary worry about this. Do you spend much time on college campuses or around military bases? If so, you know that credit card companies and shady car dealerships <i>loooove</i> the newly 18 year old adults.<p>I think that introducing people to social media right when they'll be on the hook for all their bad decisions during the exploratory period is going to result in a lot of 18-20 year olds in a ton of debt + them being even more hooked on social media than kids are now because they'll also be away from adult supervision. Imagine the sports betting ads for the newly 18. The influencers puffing up the newly 18 about how adult and mature they are and all adults buy product X, etc. They'll have no way of knowing what's normal to share and will probably overshare, but unlike minors, they'll be legally able to torpedo the rest of their lives. Etc.<p>Making it 18 puts a target on their back for the ad and social media companies: Fresh meat that aren't entitled to any protection. That seems like a bad combination to me.
I mean if 13 year olds had cash to spend and were able to buy cars and open credit cards, they’d target them too. It’s not because they aren’t inoculated, it’s because they are susceptible to the negative consequences at that age regardless.
On that point, I don't think most adults understand the ramifications either!
18 seems terrible. You're making a fraction of the kids miss out on their friends' lives just before they're about to go their separate ways and potentially never see each other again. At least give them a bit of time to naturally settle into the new social environment before they permanently part ways?
They can build those relationships without being hooked into the slop machine.
Not disagreeing with you. But we (those of us old enough to remember) still kept in touch with our friends that moved away to different unis before social media existed.
You're missing the issue I'm referring to. Staying in touch with you is very different when the communication mechanism doesn't pose extra hurdles and burdens compared to the majority of your cohort. It's also very different when you're <i>already close friends</i> by the time you part ways. The issue here is that some people will have social media and some will lack it, putting the latter at an artificial and entirely avoidable disadvantage, causing them to make fewer of those close friendships in the first place compared to their peers. This disparity is the issue I'm talking about, when the bonds aren't strong enough yet.
I disagree.<p>If the relationship is valuable to you, it requires maintaining. Following each other on social media isn't the same.<p>Not to mention that social media is filled with someone's best moments, which gives you a distorted view of how you're doing against your peers.<p>Not to mention that half of the posts on social media are ads, which aren't good for you at any age.
> If the relationship is valuable to you, it requires maintaining. Following each other on social media isn't the same.<p>Maybe not to you, but to some people it is incredibly valuable for maintaining relationships. And these are kids about to part with most of their friends for the first time; you can't rely on them having learned your insightful life lessons by this point. I'm guessing you've never felt the negative effects being left out of a social circle can have? Relationships aren't binary; not everyone is best friends or strangers. There's lots of room in between for people to bond or be left out.
With any regulation there are pros and cons. You've highlighted a con, but there are significant pros that outweigh it.<p>I grew up in a pre-internet world. We had "negative effects of being left out of social circles" then too. Social circles being hard is not going away regardless of how this decision lands.
> I grew up in a pre-internet world. We had "negative effects of being left out of social circles" then too. Social circles being hard is not going away regardless of how this decision lands.<p>The fact that you never grew up in the social media world highlights exactly why you don't understand the problem despite thinking you do. Your good old "social circles being hard" issue is <i>not</i> the problem here.<p>Please understand that there are plenty of situations where someone gets invited to something if and only if they are <i>visible</i> to others and <i>easy</i> to invite, i.e. there exist plenty of situations where being on the platform is the <i>sole determining factor</i>. And being off the platform that the majority use puts a <i>very significant, ongoing, and asymmetric burden on the host</i> (or invitor, if it's a different person) to keep them posted on all the details, a burden that would not exist otherwise; this fundamentally makes one less likely to be included, in an entirely natural and unavoidable fashion that is no fault of anyone involved and has nothing to do with "social circles being hard".<p>You have to recognize when the system is making a problem worse than it naturally is.
> You have to recognize when the system is making a problem worse than it naturally is.<p>The irony. How can you be certain that it's not social media making the "problem worse than it naturally is"? Many people believe that social media is.<p>As someone who was not particularly visible, and was not invited to every party, I am very aware that "there are plenty of situations where someone gets invited to something if and only if they are visible to others and easy to invite". Is "not being on social media" really all that different from "not being on the <insert sport> team"? The same arguments apply. Easier to see and invite teammates, more social cachet, etc.<p>In any case, if all teens were off social media then it would not be a determining factor, and I'm sure alternative systems would emerge for inviting people to places.
> The irony. How can you be certain that it's not social media making "problem worse than it naturally is"? Many people believe that social media is.<p>It <i>is</i> the problem. I don't think you understood my point. If <i>nobody</i> in high school had it, I wouldn't be bringing it up. The problem here is some students will have it the last year and some won't. By seeing it at 18, you're making it worse for the younger kids. Either 19 or 16 would be better.
I didn’t think that was your point so I did misunderstand it. I agree this is a concern but I’m not sure how it’s totally avoided. 12% of high school seniors graduate at 19.<p>In any case, I’m probably not the right person to determine the cutoff criteria. Whatever the boundary, that’s going to be a rough time for those going through it.
> I’m not sure how it’s totally avoided. 12% of high school seniors graduate at 19.<p>I mean there's a reason (well, multiple) why I suggested 16 instead of 19. I just pointed out that 19 to try to get my point across about social media being harmful.<p>> Whatever the boundary, that’s going to be a rough time for those going through it.<p>Again, this wasn't the point. The point was that if you do this in the last year of high school, you're not <i>just</i> giving younger kids a temporary rough time for that one year. You're <i>also</i> robbing them of their very last opportunities to form the stronger long-term connections they would have made during that time, which inflicts life-long damage to them socially. If you put the cutoff a little earlier so they all have a year or two to adjust to the new social environment, you avoid that long-term harm.<p>Also, nobody said the cutoff <i>has</i> to be rounded to the exact day. It could just as well be moved to coincide with the start or end of a school year, avoiding this problem entirely. And it'll be easier to enforce in school too.
> Maybe not to you, but to some people it is incredibly valuable for maintaining relationships.<p>Communication is incredibly valuable for maintaining relationships. Whether that's through social media, IM, calling them, hanging out in meatspace. THAT is valuable. Not merely following someone on social media.<p>I am in my early 20s. My highschool life (and some after) was filled with social media use and borderline addiction. I added people on social media that I kind of knew, or talked to once, or wanted to talk to.<p>But literally none of that kept me in touch with most if not all of them. More than half of them I never messaged, never interacted with on social media.<p>What kept me in touch with them was me or them. I deleted all my social media. And the ones who make the effort to contact me out of social media? Those are the real relationships.
> Children can not consent. They can't sign contracts. They don't understand the ramifications of what algorithmically delivered content does to you.<p>I’d argue, neither do the majority of adults.<p>I’m still for this ban because a young brain is so much more malleable and hence much more at risk.
This is awesome. I have been saying that social media is like smoking. When cigarettes came along, even doctors were advertising their benefits. Now we know the harmful effects. The same is the case with social media; we just don't know the harmful effects completely yet.<p>Ban this. I am addicted and can't stop. Or put a warning on social media apps like they do on cigarette packets: using this app may be harmful to your mental health.
No, we need to kill the advertisement-based monetization model, so there is no incentive to produce addictive content.
Advertising is one of the less harmful monetization models out there, especially for kids. YouTubers soliciting donations, selling merchandise etc. are preying on young kids. If you take away advertising, these more direct methods will be used more heavily.
> Advertising is one of the less harmful monetization models out there, especially for kids.<p>No, because it incentivizes the addictive content.<p>> YouTubers soliciting donations, selling merchandise etc. are preying on young kids.<p>That's a different issue, and already happening. It should be banned too.
If Mastodon were used by teens, it would have all the same problems.<p>Advertising is not the problem here.
Mastodon doesn't have the incentive to try to spy on everybody and try to get everybody hooked by any means necessary.
Masto has no social graph. It's just like IRC and the PHP forums that many of us here used as teens.
Right now this is perhaps true, because Mastodon has to compete with commercial social media.
We all want this regardless. Yet we don't. We also want innovation, and that means we have to vote with our behaviour by avoiding the shit coming down the pipe. So for starters I use an ad-blocker. But platforms will still strangle free speech because of their business model acting out.<p>I think for starters we should have competition, and therefore mandate federation between platforms. No big winner-takes-all tech monopolies. But the USA doesn't want to give up the profit, nor the power that comes with it, to own the world.<p>Perhaps as an example: Google performs an auction in the milliseconds before serving up ads. So it's possible.<p>In the end platforms will integrate an AI buddy that will come to know us even better, and that represents our ways. Of course we'll want that, but please, federate.
You can put a warning everywhere, because everything or anything may be harmful to your mental health. <i>sighs</i><p>"This is why we can't have nice things".
This is an attack on freedom.
To be fair, every single law in the world is an attack on freedom if you're a 5-IQ freedom absolutist.<p>You can't drive a car without headlights, you can't drive a car without seatbelts, you can't kill your neighbour if he fucks your wife, you can't open a shop selling BBQ dog ribs... life is so unfair =(
I don't know what to tell you... Minors are supposed to lack freedom compared to adults in a society.
How do you think social media is going to verify that you're not a minor?
No, it's OUR freedoms that are being messed with, not just the kids'.
Let kids have guns, drink booze, and smoke too while we're at it?
I mean you could make an argument that outlawing murder is also an attack on freedom. That doesn't mean it isn't a silly argument however.
Yes and rightly so
They'll pick the biggest ones. They won't block them. It'll be up to the platforms themselves to prevent access to children under 16.
This place is no better
Weak minds love government intervention over parental responsibility. It’s awesome! Get your IDs out, everyone, think of the children. JFC.
Here's the reality: a significant percentage of parents will take zero parental responsibility and are ill-suited to be parents.<p>This is just a fact.<p>How to solve this?<p>1. Only allow suitable, responsible people to be parents. Would you like this? This would be highly unpopular and wouldn't happen.<p>2. Or, alternatively, you enforce uniform rules across the whole population of minors, so that children without functional parents have limitations and protections in place as if they had responsible parents.<p>The second option is simply more palatable and feasible to implement.
Then parents all over the world have failed.<p>Even the successful ones have a limited shelf life before their kids become isolated from their friends and lose out on an actual real-world social life.<p>The tone of your response signals that you're definitely American. Americans have a different culture and perspective when it comes to freedom and government intervention, given your history.<p>That doesn't mean your perspective fits other countries.
Could not agree more.
Absolutely, and if people can't be good parents they shouldn't be.<p>Maybe people should be given permission to have kids. Then the only people who have kids are the ones who can carry that responsibility.<p>So many parents today don't have the education, time or knowledge to truly understand what their kids are doing. Just imagine: we need to curtail our freedoms so that these weak parents cannot screw up their kids because of their inability to adapt to the world.<p>/s<p>PS: for what it's worth, I doubt this bill will pass.
Creating verifiable IDs for children is a failed dream, and cannot be achieved; even the most draconian interventions will have errors.
Hard agree. It's our generation's smoking. Zuckerberg's recent PR glow-up is just this generation's Marlboro man. An acceptable face on something much more sinister.
Apart from the logistical issues with tracking and verifying everyone's identity, I think cutting teenagers off from an (admittedly very manipulated and dysfunctional) source of community is not a straightforwardly good idea. To give some examples, isolated LGBT teenagers probably benefit from being able to find and talk to people like them, and people into all kinds of niche hobbies and interests can be inspired to learn and create by other people into the same thing (cosplayers, digital artists, electronic musicians, etc). Also social media is used for organizing a ton of IRL events as well. (It would be nice if online community was not all centered on Facebook/Twitter/etc, but unfortunately that's the current situation.)
Another view might be that kids are better left off exploring their identity without agenda-driven strangers influencing them over the internet.
It's a lot easier to explore one's identity when you're talking to people who have similar parts of their identity. I'm curious how one would do that, on any medium, without being exposed to 'agenda-driven strangers'?
What are you talking about? Finding your people is one of the main benefits of the internet for minorities or folks with niche interests. We're literally on an interest-based community right now.<p>There's this weird belief in certain circles that being gay is contagious and that if we could only keep our kids away from seeing them they'll turn out "normal", and it's bollocks. They'll just be unhappy and ashamed of themselves, think they're broken and not know why.
Ah yes, when you are a minority living your life is just 'an agenda'. It is so much better that kids feel alone and isolated than knowing facts that there are other people like them. Fuck that and fuck you.
> (It would be nice if online community was not all centered on Facebook/Twitter/etc, but unfortunately that's the current situation.)<p>It's really not. Discord is very common too.
Good, but not enough if you ask me. Mainstream social media makes money out of angering people and the addiction it creates, and it affects everyone, not just kids: I've seen a few grown-ups among friends and others, even people 60 and older, completely ruined by that crap. I don't see any reason why corporations that obey no moral obligation would be motivated to change their business model anytime soon, unless forced from above.
What about non-mainstream social media? What about news services? What do you think would happen if we banned both? Would the people who get angry and addicted just become happy and healthy? Or would corporations find another avenue to try and manipulate people, which those people would happily flock to?
Non-mainstream media, by definition, probably doesn't attract many people, so the damage would be limited; I think the most damage happens in crowded echo chambers where a critical mass, assuming there is one, is easily reached. Yes, very likely corporations would find other ways to exploit people's addictions, and educating kids before it's too late is a good move; however, I don't expect much cooperation from parents.
In the '90s, a lot of dial-up BBSes were run by teens under 16. Later the same happened with internet forums. Today, you'd think a 16-year-old is old enough to run their own social media: maybe a Mastodon or Lemmy instance, or something unfederated like an old phpBB forum. Yet the Australian government thinks these same people aren't old enough to access social media! They should be encouraging their teens to run one in their bedroom, fcs.
How about over 60? That's more likely to have a positive effect on society
Last I checked, if you're over 60 then you're an adult and can do adult things. You have long since stopped growing into your body by then. Children are not finished yet; their brains continue to mature and grow into their early 20s.
Banning <16 year olds from social media is for their protection. Banning >60 from social media (and I’d add voting) is for everyone else’s protection.
I think there is a better argument for banning anyone under the age of 30 from voting than there is for anything as low as a limit of 60 for voting.
So as a compromise, only allow people between 30 and 60 to vote?<p>(Which would be hilarious, as it means almost no one voted into office would be allowed to vote themselves anymore, unless of course that would change too.)
On the flip side, it seems evident that younger people tend to vote for the betterment of all, while older people tend to shift toward voting for «themselves».
>I think there is a better argument for banning anyone under the age of 30 from voting<p>Please expand on this.
Damn ageism. There’s some amazing 60yo assembly devs out there.
This is akin to speed limits. Doing 50 km/h on a 20 km/h street isn't going to kill you, but can kill people outside your vehicle.
And in advanced age there is cognitive decline.<p>The median seventy-five-year-old's brain is not in the same condition as the median thirty-year-old's.
It doesn't matter, because the 35-year-old isn't using his brain to vote anyway. He's just going by what social media, the news, and peer pressure lead him to. If anything, an older person has a better-established understanding of the world even if they're not better at working things out.
75 and 60 are vastly different.
Banning those over 60 is a funny jab at an older generation who are susceptible to conspiracy theories and who are not media-smart. They have lots of trouble differentiating signal from noise. I doubt this person was serious. Nevertheless, it was funny.
If at “over 60 you can do adult things”, why do we take their driver's licences away and make them sit a driving test every year, unlike adults under 60?
They don’t take their license away, they just test more frequently to watch for deterioration of vision, etc.<p>Also, people under 60 still need to test and qualify for a license. So I’m not sure why you went down this comparison route.
Because it's in society's best interest not to have impaired people driving; whether it's due to substances taken, illness or age.
Countries around the world need to bite the bullet and implement that for voting too.
That's too simple.<p>The problem is that a lot of democracies have a demography skewed towards old people, and at the same time a simple majority can dictate over something like a 30% minority of the voters.
The claim is that it protects the mental health and welfare of young people. Young people have special protection from the law in many regards including sexual consent, contracts, employment etc.<p>A 60 year old is expected to have the capacity to make informed choices. Whether they do is down to personal responsibility. Many of us will be 60 one day.
What sort of positive effect would you expect?<p>What do you think is wrong with people over 60?
How about we only allow those who share our ideas? That's great for democracy, true democratic values.
How's that?
Even better, how about <16 to >16?
I've always been a supporter of those. My view is that democracy is not about being mature enough, but about being part of a society with needs and ideas; school students need to be represented because they're part of society.
Do you attribute the Trump victory to people over 60? What about German right-wing parties? I assume similar numbers can be found in the U.S., but I did not have time to check:<p><a href="https://www.nzz.ch/english/why-are-young-germans-voting-for-the-far-right-ld.1846613" rel="nofollow">https://www.nzz.ch/english/why-are-young-germans-voting-for-...</a><p>The most insane left culture warriors who wanted to relive a second spring, on the other hand, are indeed approaching or over 60. Fake culture warriors like Pelosi and Biden are over 80.<p>Fake culture warriors in software organizations like Python are approaching or over 60. You know, the exact same people who read Ayn Rand and supported ESR when it was politically expedient for their careers 20 years ago, because they always go with the herd, no matter how stupid it is.<p>These culture warriors, through their purges and insane statements, have so thoroughly destroyed any trust in the Democrats that there is a landslide victory for Trump now. It will take years to restore that confidence.
> We don't have any studies showing that old people harm society using social media<p>As an American who just watched the recent election go down, I have to strongly disagree with this.
That's obviously a biased statement. I happen to share your bias about the election outcome, but I'm afraid analysis shows that Gen Z has moved towards the Republicans in comparison with similarly aged voters 4 and 8 years ago, while people over 60 have moved towards the Dems. So your assumptions are not correct, apart from the fact that they're not even relevant to the discussion about the influence of social media (and mobile phones) on young people.
How can you justify that view? Harris lost because of voter turnout. I don't see how >60s are to blame for that. I also don't see how >60s are to be blamed for young people moving right.
> <i>I also don't see how >60s are to be blamed for young people moving right.</i><p>Rebelling against the perceived establishment as presented to them by their teachers (some of whom are that old) probably had something to do with it. (Their parents too, but their parents aren't >60)<p>Young men particularly have been getting dosed with idpol messaging about men by their predominantly female teachers from a young age. Anecdotally, there's also a general thing where the older a teacher gets the more embittered by experience they become, and they become more likely to fall into negative behavior patterns like having "nemesis students" they like to pick on, or exhibiting flagrant bias in the classroom against an entire gender.
Yes, after years and years, we only recently have solid studies showing that social media harms individual children. I expect it’ll take a bit longer to finalize studies showing the harm to <i>society</i>.
> Older adults are relatively more susceptible to impulsive social influence than young adults<p><a href="https://www.nature.com/articles/s44271-024-00134-0" rel="nofollow">https://www.nature.com/articles/s44271-024-00134-0</a><p>The number of test subjects is small, but hopefully this will lead to more funding for larger projects.
>Stupid, snarky comment intended to create division, that belongs in a trash heap like Reddit or Twitter, not here.<p>This also doesn’t belong here.
Hot take: most reddit discussions are every bit as good as HN comments. There is no need to make HN "not Reddit".
While I dislike social media, this ban is as stupid as Australia's laws enforcing bicycle helmets.
Will this mean I will have to register as my real name on Hacker News? Not a chance
How is a bicycle helmet a bad idea? Many countries have that for minors. And obviously most(?) western countries have seatbelt laws and motorcycle helmet laws.
It's also a natural effect of having publicly funded healthcare, I guess.<p>> Will this mean I will have to register as my real name on Hacker News?<p>First of all, age verification shouldn't mean the social media provider gets true identities. They shouldn't be trusted with that info. There need to be services that let you verify your age against one service, with the media service just getting the receipt of that verification. Whether such a service exists already or not shouldn't matter. The law should be written so that social media companies are restricted in what they can do when they can't be sure someone isn't a minor, and in what they can do when they are sure. For extra safety, perhaps it should say they <i>can't</i> be allowed to see, for example, physical IDs. Because otherwise you'd risk privacy issues.<p>Second, I think it's better to formulate these laws the way the New York draft did: specific <i>features</i> are restricted for minors, such as endless media feeds based on past behavior (like the video "shorts" feeds in all the major platforms today).
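To sketch the "receipt" idea in the first paragraph: below is a minimal, hypothetical example using Ed25519 signatures from the Python cryptography package. An assumed government-run verifier checks ID out of band and issues a signed claim that only says "16 or over"; the platform verifies the signature but never sees a name or ID number. (A naive bearer receipt like this is still linkable if reused, which is why real proposals lean on blinded or single-use tokens.)<p><pre><code>import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Verifier side (hypothetical government service): checks a passport/licence
# out of band, then issues a signed age claim containing no identity at all.
verifier_key = Ed25519PrivateKey.generate()
verifier_pub = verifier_key.public_key()

def issue_receipt():
    claim = b"age>=16;nonce=" + os.urandom(16).hex().encode()
    return claim + b"|" + verifier_key.sign(claim)

# Platform side: learns only "some verified 16+ person", never who they are.
def accept_signup(receipt):
    claim, _, sig = receipt.partition(b"|")
    try:
        verifier_pub.verify(sig, claim)          # raises if the signature is bad
        return claim.startswith(b"age>=16")
    except InvalidSignature:
        return False

print(accept_signup(issue_receipt()))            # -> True
</code></pre>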
> this ban is as stupid as Australia's laws enforcing bicycle helmets.<p>You're damn right, same as the laws enforcing seatbelts, car head and brake lights, ABS, testing of tap water quality, testing of food quality, &c.<p>I want absolute freedom to get utterly fucked by the first megacorp or dumbass who wants to do it!!!
As sibling comments seem to have missed the point: laws mandating helmets reduce the general rates of cycling, as people without helmets don't cycle at all. Cycling is so good for your health that the risks associated with not cycling are actually greater than those that go along with cycling without a helmet.
Enforcing bicycle helmets is a good idea. It's about protecting your health and reducing the burden on the public health system.<p>I've fallen off a bike before and my helmet definitely saved me from a serious head injury. Would I have worn one if it was not compulsory and drilled into me as a child that's what you do when you ride one? Maybe not.<p>It saved me that day and I expect it saves many people in this country every day too.
> Would I have worn one if it was not compulsory and drilled into me as a child that's what you do when you ride one?<p>Yes. Because this is a false dichotomy. The latter does not depend upon the former. I can say that with certainty because I received the message growing up in a country with a cycling proficiency programme in schools instead of mandatory helmet laws.<p>Everyone should wear a helmet when riding, but <i>criminalising noncompliance</i> is an inefficient, reductive, expensive, heavy-handed, unnecessarily punitive, and ultimately counter-productive approach to achieving it.
> It's about protecting your health and reducing the burden on the public health system.<p>So is controlling what and how much people can eat to prevent or reduce obesity.<p>From now on, you are only allowed to eat broccoli (I actually love broccoli) to ensure you will not be a burden on the public health system.
Don’t many US states have laws requiring bicycle helmets for minors?<p><a href="https://www.iihs.org/topics/pedestrians-and-bicyclists/bicycle-helmet-use-laws-table" rel="nofollow">https://www.iihs.org/topics/pedestrians-and-bicyclists/bicyc...</a>
And this is why we try our best to have minority governments.<p>The bills being put forward lately are really concerning and I have no idea how to get a party in that would kill the eSafety Commissioner and put in strong freedom of speech laws.<p>I'm pretty content seeing the web destroy their website blocking measures. Thank you DoH and ECH!
I would remind everyone that the ban on porn for under-18s is not enforced, but it is enough to ensure it is not consumed openly or at school. That is how this will play out too.
If a kid wants to find porn then so be it. If an adult Australian citizen or media company wants to show them porn then that is probably a criminal offense.<p>Kids also find alcohol. Can't supply as an adult or company.<p>The problem for the social media companies is that their recommendation algorithms and content make it very clear they know exactly what ages are watching and what they are showing them. They are multi-billion dollar multi-national operations and not some kid running an iffy image board. I usually hate the Australian government's dumb takes on anything tech or Internet (cryptography, internet filters) but in this case I think pushing regulatory compliance on large companies is what makes this doable.
Porn isn't consumed openly or at school because of social effects (read: it is embarrassing), not because of laws. If schoolchildren are the pliable rule-followers you imply, why do they all vape?
Not sure what kind of maniac would "consume" porn openly at school. In contrast, I doubt most kids will have any such misgivings about social media, especially if not enforced.
A lot of you who are supporting this are unaware that to enforce this, EVERYONE will have to have an online government ID and the government will be tracking EVERYONE's internet activity.
Prohibition has never worked and never will. The only way to make it work is by implementing a total surveillance state with draconian punishments for noncompliant citizens. But given that I'm more or less the only one who realises this, I've made my peace with what is coming.
'Prohibition didn't work' is an almost meaningless statement. It's always some variation on 'yeah, drastically fewer people drank, but it didn't eradicate it <i>entirely</i> and cost a nonzero amount to maintain'. Same as banning anything ever, like murder, or unregistered securities.
Prohibition of alcohol didn't just cause fewer people to drink ... it provided the breeding ground for organized crime, caused people to go blind from drinking home-made liquor, and started the war on drugs, which eventually led to the fentanyl crisis.<p>My body, my choice.
But we already prohibit children from drinking alcohol. How is social media any different?
And that's where the analogy falls short.<p>You think teens are going to start using darknet social media?
That assumes "make it work" means making it work 100% with no gaps or misses. If it only works 90% of the time then it's still working quite well and you get most of the benefit from it without needing <i>"a total surveillance state with draconian punishment for noncompliant citizens"</i><p>For instance, it has always been possible for sufficiently motivated kids to acquire hard liquor. If nothing else they can steal it from their parents. To stop this you would need surveillance inside every home and extremely harsh punishments. But we <i>don't</i> actually need that, because an imperfect prohibition works reasonably well.<p>The whole premise that if a law isn't perfectly enforceable then it's a bad law is a weird thing that techies on the libertarian/autism spectrum come up with a lot, but it's not the way the world actually works.
Prohibition of alcohol didn't just cause fewer people to drink and drinkers to drink less ... it provided the breeding ground for organized crime, caused people to go blind from drinking home-made liquor, and started the war on drugs, which eventually led to the fentanyl crisis.<p>My body, my choice.
Prohibition of hard liquor <i>for kids</i> causes negligible organized crime. Furthermore, it is unlikely that prohibiting <16 kids from using social media will trigger an organized crime wave.<p>The failings of blanket alcohol prohibition in America do not automatically transfer to all forms and instances of prohibition. Thinking otherwise is a trap that techie libertarians often fall for, for some reason. Science should study it.
Or maybe you are looking at this through slightly rose-coloured glasses. The tobacco ban in New Zealand, as well as similar ideas in England, would also affect grown-ups. Also: what is social media anyway? I don't think it can be defined as easily as a molecule, and supervising it would require surveillance, which is not a good thing. Sure, kids need boundaries and guidance, but the broader picture shows that such ideas are usually a prelude to extending the same thinking to adults. And again: my body, my choice. I do not accept the state telling me what not to do when it isn't impacting somebody else on an individual and direct level. End of story.
> when it isn't impacting somebody else on an individual and direct level<p>But it is. Children using social media put active pressure on other children to use social media too. Not using social media today is very difficult for children, since it can make social integration a lot more difficult. Children using social media are actively pushing other children into addiction. Children using social media are actively harming others.<p>Your body, your choice. But if you regularly run over people, I will take away your drivers license.
Hey, it definitely works! It lets politicians pat themselves on the back like they did something, earn some brownie points, and maybe even get a few more people to vote for them.
Start with 16. Increment it every year.
Tasmania tried doing that with smoking and it kind of didn't go through due to technicalities.<p><a href="https://www.smokefreetasmania.com/new-law/" rel="nofollow">https://www.smokefreetasmania.com/new-law/</a>
And make the home page an ugly green with disgusting pictures of health problems?
I'd suggest 61 as well, and closing the gap ;-)
Love this!
I support the ban.<p>In terms of enforcement, social media platforms already use algorithms and gather huge amounts of data on their users, enough to make a good estimate of age even if a user has signed up with a fake age.<p>So, when the algorithm detects that a user is likely to be underage, that's when they'd be required to show ID.
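To make that mechanism concrete, the gate could be nothing more than a threshold over whatever age estimate the platform already derives from behavioural signals. A rough sketch; the class, fields and threshold are invented for illustration and are not anything a platform actually runs:<p><pre><code>
# Hypothetical sketch: escalate to ID verification only when the platform's
# own behavioural age model confidently predicts an under-16 user.
from dataclasses import dataclass

@dataclass
class AgeEstimate:
    predicted_age: float   # point estimate from a behavioural model (illustrative)
    confidence: float      # 0.0 to 1.0, how sure the model is

def requires_id_check(estimate: AgeEstimate,
                      min_age: int = 16,
                      confidence_floor: float = 0.8) -> bool:
    """Ask for ID only when the model is confident and predicts under the cut-off."""
    return estimate.confidence >= confidence_floor and estimate.predicted_age < min_age

# A confident under-16 prediction triggers the check; an uncertain one does not.
print(requires_id_check(AgeEstimate(predicted_age=13.2, confidence=0.9)))  # True
print(requires_id_check(AgeEstimate(predicted_age=13.2, confidence=0.4)))  # False
</code></pre><p>The point is only that the expensive step (showing ID) would be reserved for accounts the platform already suspects, rather than demanded of everyone up front.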
I put in a submission to the committee for this issue[1]. The big issues from my point of view are widespread ID validation and the security and privacy consequences of that, definition of social media, lack of controls provided by social media websites, and further risks to centralisation (like ID providers requiring an app that can only run on an iOS or Google Play device).<p>Many of the ID verification services that have spun up over recent years like AU10TIX are private companies that don't have their users' interests at heart. It wouldn't surprise me if they become more involved with the so-called data economy (data broker ecosystem)—if they aren't already.<p>Meta itself causes harm to users of all ages with their algorithms (like suggested content on the feed) which can't really be turned off, and fueled the misinformation crisis which really took off a few years ago. The social media companies have done a good job of convincing the Australian government to overlook these harms.<p>1: <a href="https://roffey.au/static/submission-social-media-2024.pdf" rel="nofollow">https://roffey.au/static/submission-social-media-2024.pdf</a>
So when China regulates the internet, it's a dictatorship; when a western-affiliated democracy does it, it's... good?
I can't answer "whether censorship is good or bad", but one thing I think is true and needs to be clarified is that there is a democratic process for this kind of legislation -- representatives debate the bills and vote on them, and if they do become law, they may be challenged in the courts. They don't just happen. By comparison, censorship in China is mandated by a number of different laws and "rules" with extremely broad and vague wording, some of which date back decades, with the explicit intention that whoever is in power can interpret and enforce them arbitrarily, and... good luck challenging them in court.<p>Of course there are lots of caveats here -- e.g. in the US model, what if the executive branch, the legislature and the courts are all controlled by one person (which we'll see very soon), while on the other hand, China does actually have a parliament and a formal process for much of its legislation (even though that doesn't mean much). Still, at a very high level, there is a distinction.
uh yeah, dictatorships are bad. Society determining their government and enacting laws through consensus is good. Is that controversial now?
It sounds righteous for sure, but the implementation sounds terrible.<p>Let's suppose kids do get forced off the mainstream social networks with Australian legal entities (Meta, Goog, Bytedance, X?). What stops them from joining fringe social networks operated outside of Australia with even less oversight? Surely nothing bad could happen between kids on those networks.<p>Yes, the Australian government could DNS-block them like they do torrent sites.
But it really wouldn't be beyond teenagers to inform their friends about how to change DNS servers...
That's what I thought.<p>How do you verify the age? Verifying ID? That seems like another route that could go wrong. Just having a checkbox to confirm the age? Do a poll on this website to see how many people visited an adult website before they were old enough...<p>As long as you have internet access, there will be a way.<p>I don't have a good proposal, but I feel this is more of a cultural issue than something we can simply fix by passing a law.
This is fake legislative action; there is no way to legally enforce this short of complete and total policing of the internet and people's phones. The quiet part is that by separating the "law" into all of its bureaucratic bits, such as needing to "ratify" it later and then actually creating and funding some sort of enforcement body at some further, impossible-to-predict time, it all points to a desperate and floundering government resorting to the lamest kind of tactics to buy a bit more time at the trough.<p>Oink oink.
IMO it would be nicer if algorithmic feeds & targeted ads were banned for them. So basically a social network with a chronological feed and no targeted ads should be OK-ish. This would allow them to use the tech without the addiction mechanisms and the influencing.
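The distinction is easy to state in code: a chronological feed is just a sort by timestamp, while the feeds being objected to sort by a predicted-engagement score. A toy sketch (the field names are invented for illustration, not any platform's actual API):<p><pre><code>
# Toy comparison: chronological feed vs engagement-ranked feed.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    posted_at: float             # unix timestamp
    predicted_engagement: float  # model score a ranked feed optimises for (illustrative)

def chronological_feed(posts: list[Post]) -> list[Post]:
    # Newest first; no behavioural signals involved.
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

def engagement_ranked_feed(posts: list[Post]) -> list[Post]:
    # What the comment above proposes banning for minors: rank by whatever
    # the platform predicts will keep the user scrolling.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)
</code></pre>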
There are parental controls on most phones. Educating parents on how to use them is a better option than a blanket ban that'll either: a) not work or b) require you to use some digital / national ID to register.
It's crazy that human beings, with the newfangled ability to communicate in ever-easier ways, have created this problem for ourselves. That is your enemy. Social media and government rules are not the enemy, nor the saviour you think they might be.
I'm not sure I see the value of going all the way to 16; 14 would already be a massive win.<p>The cost/benefit of social media between 14 and 16 is much more favorable to social media than it is below 14.
The way to give this teeth is for any proposed law to charge the parent or guardian of social networking children, as well as the social networking site. Once it hits the parents' pockets they'll start to get involved in their kids lives and see what they're looking at on the web.
I wish they would make some effort to restrict what inputs the social media algorithms can use to target people, instead of this ham-fisted idiocy.
Regulating this is always going to be impossible, and when it’s not, you’re in a surveillance state and you have bigger problems than the age you’re allowed to be on social media
"Australia Proposes Ban" is way too common a headline.
I agree with others who see the identification problem, but I also think it is a step in the right direction towards improving the current situation.
What even counts as social media? Is Hackernews social media? Is my future platform where people can talk to each other social media?
It's all pure desperation; they could instead force social media companies to only promote useful educational, pro-science, pro-fitness, documentary and family-style content, and then social media would be helpful. Forming communities around learning, robotics, science? What could possibly be better for children who look for purpose in life? It would be fantastic. But of course half the grifters on social media are already hiding in those tags and serving the most shallow, useless, fake content about e.g. ancient pyramid aliens or discussions about how veganism will help your body. As you can tell by my last little insertion here, half the problem is that even the adults can't come to a shared understanding of what is "good" or true.
> What even counts as social media?<p>I think the way the EU approached this with their "digital gatekeepers" is smart. Recognize that policing the entire internet isn't possible or even desirable. Focus on those few companies with the largest capacity for harm. Different criteria might be appropriate when focusing on potential harm for children (e.g. Roblox rather than Twitter) but besides a few changes you'll probably end up with roughly the same list.<p>I'm not sure I'd support an outright ban, but rather very strict monitoring and requirements around moderation, in app purchasing, gambling mechanics, and so on.
Australia takes a different approach and says (in their Basic Online Safety Expectations 2024) that <i>every online account</i> must be linked to a phone number.<p>This is the same country that brought you "the laws of mathematics are very commendable but they don't apply in Australia".<p>I foresee a two- or three-tier Internet in the future, and Australia will probably be the first "western" country to block Tor.
> Australia will probably be the first "western" country to block Tor.<p>The Australian way would be to "ban" Tor without any particular concern for enforceability or technical feasibility. Any actual blocking would be pushed onto industry somehow, which would then proceed to half-ass it, doing the absolute minimum possible to demonstrate they are complying with regulation.<p>I like Australia a lot, but a lot of the time it feels like the political priority is to "make it look like something is being done". No one would actually care if the blocking worked or not unless the media made a big song and dance about it.<p>I also wonder how much of this ban is about "punishing" X and Meta in particular - Meta for its refusal to pay for news and X because they didn't jump to immediately remove stuff the government wanted taken down.<p>> What even counts as social media?<p>Anything the government needs more leverage over or wants to shake down for money.
> The Australian way would be to "ban" tor without any particular concern for enforceability or technical feasibility. Any actual blocking would be pushed onto industry somehow, which would then proceed to half-ass it, doing the absolute minimum possible to demonstrate they are complying with regulation.<p>Just because it wouldn't be well-implemented doesn't mean it's nothing to worry about, not to mention that such things are almost always just 1 step on a path of many.
> X because they didn't jump to immediately remove stuff the government wanted taken down<p>Yes they did. X has <i>always</i> capitulated to the whims of governments.
Where in that document [1] [2] does it say that every online account must be linked to a phone number?<p>It says that an account must be linked to an email address or a phone number, which is the standard for almost every online service.<p>[1] <a href="https://www.legislation.gov.au/F2022L00062/latest/text" rel="nofollow">https://www.legislation.gov.au/F2022L00062/latest/text</a><p>[2] <a href="https://www.esafety.gov.au/sites/default/files/2024-07/Basic-Online-Safety-Expectations-regulatory-guidance-July-2024.pdf?v=1730953089680" rel="nofollow">https://www.esafety.gov.au/sites/default/files/2024-07/Basic...</a>
> every online account must be linked to a phone number<p>They seem to be learning a lot from the Chinese.
In a way it shouldn't be tied to size either, it should be tied to results. If a social company is clearly only interested in profit to the exclusion of societal benefits then they deserve to be regulated.
I would say anything with an algorithmic, personalised feed is social media, and that's what I would stop kids from having access to.
I think the main danger is in the engagement maximisation done through these algorithmic feeds.
> What could possibly be better for children who look for purpose in life?<p>They sort of naturally do that if you have the appropriate challenges and opportunities around them.<p>> ancient pyramid aliens or discussions about how veganism will help your body.<p>There used to be a tabloid called "News of the Weird." This stuff just exists. You'll find it anywhere people gather. We're story tellers. When we don't have a compelling story we just make stuff up. It's identical to the point above.<p>> even all the adults can't come to a shared understanding of what is "good" or true.<p>It's not possible. If the children are intended to inherit the future then this is a flawed and reductive strategy. You will not achieve what you seek through parochial means.<p>Really.. I think your biggest problem should be advertising. It should be nowhere near children. Ban _that_ but keep the social media.
I think the proposal is fundamentally flawed because of this reason.<p>Facebook, Instagram and X/Twitter are probably what's intended here, but what about Tumblr, DeviantArt or Discord? What about Reddit or a generic forum? What about VRChat or Webfishing?<p>If this is about protecting children from harmful depictions of body image or misogynistic content, then why not instead propose a law that states online services that allow children to join need to appropriately moderate the content that is shown to children or could face massive fines. I don't necessarily agree with that approach, but at least it would make sense with what their stated objectives are.
Yes. Does Roblox count? What about Minecraft?
Start with the big 4 or 5 and add more in there as and when they become problematic. Who decides which one? Some agency.<p>I think the real solution is banning under 18s from having smart phones period.
I'm late to this thread, but I propose that for greater social good, people OVER 16 should be banned.
What about Sky News?
Can't we just ban exploitative & manipulative social media instead? This is another case of throwing the baby out with the bathwater.
This is the wrong approach, to achieve the desired effect they should make social media compulsory for anyone over ~40.
Ok, they will join places like 4chan instead.
Social media companies would have a strong motivation to ensure the plan fails.
They would want it to fail to stop it spreading to other countries, and in my opinion they could help it fail in several ways. Social media companies would have NO motivation to ensure its success. Maybe there needs to be a significant penalty for social media companies if the proposal fails?
The penalty for potential failure needs to be very high, such as banning an app from the country. It could be done. (Something that would give social media companies significant motivation.) Social media implementation efforts should be examined and audited by software experts and other specialists, even down to viewing every part of their software system, and actually tested by outside experts. I used to work for a large gambling organisation and the company's systems were audited in exactly that way: the auditors could request to see ANYTHING, including running their own test scenarios and auditing our test results. Depending on the change, an audit could take days. The penalty for hiding anything was so high that it was never done. The same type of system needs to apply to social media companies. I predict they will claim it cannot be done.
This might force children to be social in the real world. And disabled children will remain unsocial.
Doesn't everyone lie about their age anyway, to get around the stupid restrictions?
I propose a ban on all social media for politicians and government officials.
I wouldn't have said this a few years ago, but I think this is a good idea.
Also note that the government is attacking social media on a second front.<p>Last night they took advantage of the population being distracted by the US election by having an extended parliamentary session to push forward with a second reading of the controversial misinformation bill.<p>The government and state media apparatus are of course both immune from any penalty under the bill.<p><a href="https://www.abc.net.au/news/2024-09-11/acma-crackdown-social-media-disinformation-bill/104339144" rel="nofollow">https://www.abc.net.au/news/2024-09-11/acma-crackdown-social...</a>
Part of the concerted effort against social media is the (sudden?) loss of ability to control the narrative by the established power base. Politics and Print / Television Media used to have the last word on "facts", but now every man and his dog can create their own narrative.<p>Methinks they're, again, using "think of the children" as a bulwark against the inevitability of their own waning power. The more things change...<p>Can we also prevent under 16s from being exposed to religious teachings?
I really hope you sort this out.<p>When I have finished my master's degree, I would like to live in your beautiful country, where a lot of wonderful people live.
I would raise the limit to 20 at least
I would argue until the prefrontal cortex has matured, which is around age 25 for males. These platforms are highly manipulative and are causing immeasurable damage to our social fabric. A dopamine slot machine in your pocket powered by AI, ready to grab your attention any way it can, by outraging, triggering or seducing you. I would personally describe these platforms as evil.
An insane idea, encroaching on the liberty we enjoy on the web.
These types of decisions should be up to parents, and will eventually play out as they see fit.
australia proposes big button that says "are you really old enough to read posts? please do not lie"
All of these <16s will be voting in a few years, and Australia has compulsory voting. I hope they remember this on their first visit to the ballot box.
Kids in schools which forbid phone use for everyone are often happy with the result. Banning social networks for everyone may be popular with them by the time they vote.
This will be wholly supported by both parties of our primarily two-party system. There's no alternative but a protest vote to one of those fringe parties that will never get any real power anyway.
They’re protecting the children.
I, for one, am all in favour of our new "summary execution of children if they access social media" law.<p>Bring on the brown shirts! We must protect family values at all cost!
Yeah I’m not against it. Please EU follow suit.<p>What really needs to happen is tough regulators digging through the algorithms. Why are boys on YouTube getting served so much manosphere crap? Etc.<p>We have enough studies about what these algorithms are trying to do to people to keep them engaged. It’s not healthy for society.
The algorithm is pretty simple: show random videos, pick out the categories that provoke the most reaction, show more videos from those categories, and so on. That's why the algorithm often converges to a local maximum and keeps showing disturbing shit on one topic.
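That loop is roughly a greedy bandit, which is why it locks onto a single topic. A toy sketch of the dynamic, with made-up categories and reaction rates (not YouTube's actual recommender):<p><pre><code>
# Toy greedy feedback loop: categories that provoke the strongest reactions
# get shown more, so the feed converges on a single topic.
import random

categories = ["cooking", "politics", "outrage", "sports"]
true_reaction_rate = {"cooking": 0.2, "politics": 0.5, "outrage": 0.8, "sports": 0.3}

shows = {c: 0 for c in categories}
reactions = {c: 0 for c in categories}

def observed_rate(c: str) -> float:
    return reactions[c] / shows[c] if shows[c] else 0.0

for _ in range(5000):
    # Occasionally show something random; otherwise exploit whichever
    # category has provoked the most reaction so far.
    if random.random() < 0.1:
        shown = random.choice(categories)
    else:
        shown = max(categories, key=observed_rate)
    shows[shown] += 1
    if random.random() < true_reaction_rate[shown]:
        reactions[shown] += 1

print(max(categories, key=observed_rate))  # almost always "outrage"
print(shows)                               # and "outrage" gets the bulk of the impressions
</code></pre>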
Honestly, that does not seem like enough. I'm worried about the entire age range of human beings in this regard, not just children.
Time for some "right wing extremists" to win a landslide victory and send that administration packing.<p>Hopefully, this is an issue that will make Australians finally wake up from their hypnosis...
I fully support this.<p>Fuck google, meta, tiktok, all of them, they are ruining the world.<p>And fuck amazon too.
Another development that makes me happy for the US 1st Amendment.
Ahh yes, government knows what is best for you.
lol, probably because they have a propaganda leak.
Such a shameful piece of news; it's sad to see the level of the Australian govt trying to ban things for "safety" in the year 2024. Good thing one smart Digital Industry Group representative already told them they are not thinking straight. Of course, giving young people something better and more exciting to do is not in their plans.<p>Their dream is to BAN, take ID VERIFICATIONS, and FINE private companies for the rest of their days. Any Australian should be ashamed of such dull and unsophisticated policies, and BRAVO to Sunita Bose.
Completely unenforceable
It's so sad seeing how many people think it's on the government to raise kids.
The main reason for this ban, like the attempted TikTok ban in the US, is the overwhelmingly anti-zionist views of the younger generation due to repeated social media exposure to the genocide in Gaza. Fundamentally social media facilitates the faster and broader spread of information than ever before in human history, which is a threat to the gatekeepers who for decades have tightly controlled what information the people of Australia have access to.