Can’t say I blame them.
Better I give a little bit of PII than some kid grows up too early.
Would you be able to tell the difference if this policy came from a place of compassion?
https://www.intelligence.gov.au/news/asio-annual-threat-asse...
That said, a better approach would be to limit kids under a certain age from owning smartphones with full internet access. Instead, they could have a phone without internet access (a dumb phone) or one with curated/limited access.
Personally, I'm not too worried about what risqué stuff they'll see online, especially teenagers (they'll find it one way or another); for me it's more about the distraction smartphones cause.
Thinking back to my teenage years I'm almost certain I would have been tempted to waste too much time online when it would have been better for me to be doing homework or playing sport.
It goes without saying that smartphones are designed to be addictive, and we need to protect kids more from this addiction than from bad online content. That's not to say they should have unfettered access to extreme content; they should not.
It seems to me that having access to only filtered IP addresses would be a better solution.
This ill-considered gut reaction involving the whole community isn't a sensible decision, if for no other reason than it allows sites like Google to soak up even more of a user's personal information.
And at no point does it ever occur to you to demand proof that measures such as this will have the desired effect... or, indeed, that the desired effect is worth achieving at all.
2027: the companies providing the logins must provide government with the identities
2028: because VPNs are being used to circumvent the law, if the logging entity knows you're an Australian citizen, even if you're not in Australia or using an Aussie IP address then they must still apply the law
2030: you must be logged in to visit these specific sites where you might see naked boobies, and if you're under age you can't - those sites must enforce logins and age limits
2031: Australian ISPs must enforce the login restrictions because some sites are refusing to and there are loopholes
2033: Australian ISPs must provide the government with a list of people who visited this list of specific sites, with dates and times of those visits
2035: you must be logged in to visit these other specific sites, regardless of your age
2036: you must have a valid login with one of these providers in order to use the internet
2037: all visits to all sites must be logged in
2038: all visits to all sites will be recorded
2039: this list of sites cannot be visited by any Australian of any age
2040: all visits to all sites will be reported to the government
2042: your browser history may be used as evidence in a criminal case
Australian politicians, police, and a good chunk of the population would love this.
Australia is quietly extremely authoritarian. It's all "beer and barbies on the beach" but that's all actually illegal.
While I yearn for the more authentic and sincere days of the internet I grew up on, I recognize very quickly by visiting X or Facebook how much it isn’t that, and hasn’t been for a long time.
I think this bill is a good thing and I support it.
We already reached that point several years ago.
I agree though, most information is misinformation, even the most popular stuff, Joe Rogan et al.
I really wish all this time, effort, and money was spent on educating our kids to safely navigate the online world.
It's not like they'll magically figure it out for themselves once they turn 17.
I don't see kids being banned from reading history books, which would be more like the world you're describing. I see a country which is pretty multicultural and open minded trying its best to protect itself from the absolute nonsense that circulates online. When I was a kid, I could only watch certain TV shows because my bed time was 7:30-8pm; that's when the "naughty stuff" came on TV. Was that the ministry of truth at work?
Do you have any idea what kids are exposed to now? I mean, the answer is probably no, you have no idea. But judging by the rot I see my younger friends and family members watch and regurgitate, I can tell you, it's not great.
Nothing screams "fear mongering" like comparing with living in Soviet Russia.
Look, we can argue all day. There is no right or wrong answer. I don't fully support the govt's initiative, but I also don't want Meta/X/Google to have unlimited powers like they do in the US.
You probably should have started your censorship campaign with the usual bugaboos -- comics, video games, porno mags -- and not with history books.
The UK PM and the AU PM backed the US position and sent troops in (in the AU case they even sent in advance rangers | commandos | SASR to scout and call targets from ground) but they were both aware the "justification" and WMD claims were BS.
At the time it was obvious to many astute observers what was happening but governments themselves were mesmerized and awed by Big Tech.
A 20-plus year delay in applying regulations means it'll be a long hard road to put the genie back in the bottle. For starters, there's too much money now tied up in these trillion-dollar companies; to disrupt their income would mean shareholders and even whole economies would be affected.
Fixing the problem will be damn hard.
Some states in the US are doing this already. And I think I saw a headline about some country in Europe trying to put Twitter in that category, implying they have such rules there already.
https://www.greenleft.org.au/content/halliburton-australia-p...
Same here. Early on, if I found a site interesting I'd often follow its links to other sites and so on down into places that the Establishment would deem unacceptable but I'd not worry too much about it.
Nowadays, I just assume authorities of all types are hovering over every mouse click I make. Not only is this horrible, but it also robs one of one's autonomy.
It won't be long before we're handing info that was once commonplace in textbooks around in secret.
(It may be the last thing that the US has the world lead on)
It's also why legislation protecting privacy and/or preventing the trade of personal information is almost impossible: the "right" people profit from it, and the industry around it has grown large enough that it would have non-trivial economic effects if it were destroyed (no matter how much it thoroughly deserves to be destroyed with fire).
Apologies. I'm already pretty morose over the USA Supreme Court allowing age verification, which although claiming to target porn seems so likely to cudgel any "adult" or sexual material at all.
Until recently the Declaration of Independence of Cyberspace has held pretty true. The online world has seen various regulations, but mostly it's been taxes and businesses affected. Here we see a turn where humanity is now denied access by their governments, where we are no longer allowed to connect or to share, not without flashing our government-verified ID. It's such a sad lowering of the world, by such absolute loser politicians doing such bitter, pathetic anti-governance for such low reasons. They impinge on the fundamental dignity & respect inherent in mankind here, in these intrusions into how we may think and connect.
Links for recent Texas age verification: https://www.wired.com/story/us-supreme-court-porn-age-verifi... https://news.ycombinator.com/item?id=44397799
It seems quite likely that governments want to continuously chip away at privacy.
That would have the same effect.
> Drafting of the code was co-led by Digital Industry Group Inc. (DIGI), which was contacted for comment as it counts Google, Microsoft, and Yahoo among its members.
Yes, right now search engines are only going to blur out images and turn on safe search, but the decision to show or hide information in safe search has alarming grey areas.
Examples of things that might be hidden and which someone might want to access anonymously are services relating to sexual health, news stories involving political violence, LGBTQ content, or certain resources relating to domestic violence.
Been ongoing for a while now: https://roncobb.net/img/cartoons/aus/k5092-on-Tucker_Box-cuu...
This view is manufactured. The premise is that better moderation is available and despite that, literally no one is choosing to do it. The fact is that moderation is hard and in particular excluding all actually bad things without also having a catastrophically high false positive rate is infeasible.
But the people who are the primary victims of the false positives and the people who want the bad stuff fully censored aren't all the same people, and then the second group likes to pretend that there is a magic solution that doesn't throw the first group under the bus, so they can throw the first group under the bus.
Various large US tech companies played a central role in drafting this initiative. I don't think you're reasoning about this clearly.
How exactly does this curtail their powers?
It’s worse than that. Companies actively refuse to do anything about content that is reported to them directly, at least until the media kicks up a stink.
Nobody disputes that reliably detecting bad content is hard, but doing nothing about bad content you know about is inexcusable.
This wouldn't allow them to watch gambling ads or enjoy Murdoch venues.
This has led to serious problems in the case of the Afghan war, where it was clear that this whole conflict had nothing to do with Australia, could not even vaguely be construed as "defence", achieved nothing, cost Australian lives, and was a completely fabricated mess that we got into for really bad reasons (I paraphrase). The SAS war crimes thing was a symptom of our unease at our involvement (imho) - we would not normally question the things that soldiers do in conflict, this was more a way of questioning why we were in the conflict in the first place.
The actual goal is, as always, complete control over what Australians can see and do on the internet, and complete knowledge of what we see and do on the internet.
> Meta said it has in the past two years taken down 27 pedophile networks and is planning more removals.
Moreover, the rest of the article is describing the difficulty in doing moderation. If you make a general purpose algorithm that links up people with similar interests and then there is a group of people with an interest in child abuse, the algorithm doesn't inherently know that and if you push on it to try to make it do something different in that case than it does in the general case, the people you're trying to thwart will actively take countermeasures like using different keywords or using coded language.
Meanwhile user reporting features are also full of false positives or corporate and political operatives trying to have legitimate content removed, so expecting them to both immediately and perfectly respond to every report is unreasonable.
Pretending that this is easy to solve is the thing authoritarians do to justify steamrolling innocent people over a problem nobody has any good way to fully eliminate.
Manufactured by whom? Moderation was done very tightly on vbulletin forums back in the day, the difference is Facebook/Google et al expect to operate at a scale where (they claim) moderation can't be done.
The magic solution is if you can't operate at scale safely, don't operate at scale.
I don’t know where you got that from. Meta’s self-congratulatory takedown of “27 pedophile networks” is a drop in the ocean.
Here’s a fairly typical example of them actively deciding to do nothing in response to a report. This mirrors my own experience.
> Like other platforms, Instagram says it enlists its users to help detect accounts that are breaking rules. But those efforts haven’t always been effective.
> Sometimes user reports of nudity involving a child went unanswered for months, according to a review of scores of reports filed over the last year by numerous child-safety advocates.
> Earlier this year, an anti-pedophile activist discovered an Instagram account claiming to belong to a girl selling underage-sex content, including a post declaring, “This teen is ready for you pervs.” When the activist reported the account, Instagram responded with an automated message saying: “Because of the high volume of reports we receive, our team hasn’t been able to review this post.”
> After the same activist reported another post, this one of a scantily clad young girl with a graphically sexual caption, Instagram responded, “Our review team has found that [the account’s] post does not go against our Community Guidelines.” The response suggested that the user hide the account to avoid seeing its content.
> 2038: all visits to all sites will be recorded
That's been the case since 2015. ISPs are required to record customer ID, record date, time and IP address and retain it for two years to be accessed by government agencies. It was meant to be gated by warrants, but a bunch of non-law-enforcement entities applied for warrantless access, including local councils, the RSPCA (animal protection charity), and fucking greyhound racing. It's ancient history, so I'm not sure if they were able to do so. The abuse loopholes might finally be closed up soon though.
https://privacy108.com.au/insights/metadata-access/
https://delimiter.com.au/2016/01/18/61-agencies-apply-for-me...
https://www.abc.net.au/news/2016-01-18/government-releases-l...
https://ia.acs.org.au/article/2023/government-acts-to-finall...
https://en.wikipedia.org/wiki/Manufacturing_Consent
> Moderation was done very tightly on vbulletin forums back in the day, the difference is Facebook/Google et al expect to operate at a scale where (they claim) moderation can't be done.
The difference isn't the scale of Google, it's the scale of the internet.
Back in the day the internet was full of university professors and telecommunications operators. Now it has Russian hackers and an entire battalion of shady SEO specialists.
If you want to build a search engine that competes with Google, it doesn't matter if you have 0.1% of the users and 0.001% of the market cap, you're still expected to index the whole internet. Which nobody could possibly do by hand anymore.
I guess if a teenager is enterprising enough to get a job and save up and buy their own devices and pay for their own internet then more power to them.
As mentioned, the issue is that they get zillions of reports and vast numbers of them are organized scammers trying to get them to take down legitimate content. Then you report something real and it gets lost in a sea of fake reports.
What are they supposed to do about that? It takes far fewer resources to file a fake report than investigate one and nobody can drink the entire ocean.
Edit: you can’t just throw a Wikipedia link to Manufacturing Consent from the 80s out as an explanation here. What a joke of a position. Maybe people have been hoodwinked by a media conspiracy, or maybe they just don’t like what the kids are exposed to at a young age these days.
P.S. I agree with your comment.
Block lists are not new. For example Italy blocks a number of sites, usually at DNS level with the cooperation of ISPs and DNS services. You can autotranslate this article from 2024 to get the gist of what is being blocked and why https://www.money.it/elenco-siti-vietati-italia-vengono-pers...
I believe other countries of the same area block sites for similar reasons.
Do you dispute the thesis of the book? Moral panics have always been used to sell both newspapers and bad laws.
> Maybe people have been hoodwinked by a media conspiracy or maybe they just don’t like what the kids are exposed to at a young age these days.
People have never liked what kids are exposed to. But it rather matters whether the proposed solution has more costs than effectiveness.
> Maybe search is dead but doesn’t know it yet.
Maybe some people who prefer the cathedral to the bazaar would prefer that. But ability of the public to discover anything outside of what the priests deign to tell them isn't something we should give up without a fight.
I put it to you, similarly without evidence, that your support for unfettered filth freedom is the result of a process of manufacturing consent now that American big tech dominates.
Not a convincing take.
If the system is pathologically unable to deal with false reports to the extent that moderation has effectively ground to a standstill perhaps the regulator ought to get involved at that point and force the company to either change its ways or go out of business trying?
It seems like it would make more sense to implement it at the browser level. Let the website return a header (à la RTA) or trigger some JavaScript API to indicate that the browser should block the tab until the user verifies their age.
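A rough sketch of what that could look like, in TypeScript. The RTA label is an existing convention (sites embed a meta tag with the string RTA-5042-1996-1400-1577-RTA), but the browser-side API below is purely hypothetical; nothing like it ships today:

    // Hypothetical shape of a browser-provided age check. The user agent,
    // not the site, holds the age signal (parental controls, OS profile),
    // so the site never learns the user's identity.
    interface AgeVerification {
      assertMinimumAge(minAge: number): Promise<boolean>;
    }

    async function gateTab(av: AgeVerification | undefined): Promise<void> {
      // If the API is missing or verification fails, the tab stays blocked.
      const verified = av ? await av.assertMinimumAge(18) : false;
      if (!verified) {
        document.body.textContent = "Content blocked pending age verification.";
      }
    }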
This seems out of place and unrelated. If anything, Gen Z and presumably Alpha, eventually, are more religious than their parents.
As but one possible example. Common infrastructure to handle whitelisting would probably go a long way here. Just being able to tag a phone, for example, as being possessed by a minor would enable all sorts of voluntary filtering with only minimal cooperation required.
Many sites already have "are you 18 or older" type banners on entry. Imagine if those same sites attached a plaintext flag to all of their traffic so the ISP, home firewall, school firewall, or anyone else would then know to filter that stream for certain (tagged) accounts.
I doubt that's the best way to go about it, but there's so much focus on other solutions that are more cumbersome and invasive that I thought it would be interesting to write out the hypothetical.
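For what it's worth, a toy version of that tagging idea in TypeScript. The "content-rating" header name and values are invented, and in practice TLS hides HTTP headers from ISPs, so this really only fits a home or school proxy that terminates TLS (or some DNS/SNI-level variant):

    // Proxy/firewall side: drop tagged streams only for flagged accounts.
    type Account = { id: string; taggedAsMinor: boolean };

    function shouldBlock(account: Account, headers: Map<string, string>): boolean {
      const rating = headers.get("content-rating"); // hypothetical header
      return account.taggedAsMinor && rating === "adult";
    }

    // An adult's traffic passes untouched; a tagged minor's doesn't.
    const adultSite = new Map([["content-rating", "adult"]]);
    console.log(shouldBlock({ id: "kids-phone", taggedAsMinor: true }, adultSite)); // true
    console.log(shouldBlock({ id: "parent", taggedAsMinor: false }, adultSite));    // false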
In the days before electronics were endemic, physically checking a photo ID didn't run afoul of that as long as the person checking didn't record the serial number. But that's no longer the world we live in.
It isn’t. For as long as I can remember it’s been wildly authoritarian, and it seems Australians harbour a fetish for the rules that would make even the average German blush.
Hopefully times have changed (though I don’t think they have), but about 20 years ago, standard fare on the road was to provide essentially no driver training, and then aggressively enforce draconian traffic rules. New drivers can’t drive at night. New drivers have to abide by lower speed limits than other drivers. Police stop traffic for random breathalyser tests. “Double demerit” days…
This seems like more of the same. Forget trying to educate the population about the dangers of free access to information (which they will encounter anyway). Just go full Orwell! What could go wrong!
Unrelated, but why I don't agree:
The systems which permit voting down stupid laws also permit voting down good laws. This is very "be careful what you wish for", and reduces democracy to "the voter is always right even when they want stupid things".
E.g. Swiss cantons opposing votes for women inside the last 2 decades.
Most legislation aims to create the offence of misleading (e.g., lying about your age), not to actually stamp out 100% of offenders. Kids who get round this will create liabilities for themselves and their parents.
IMO an "ok" solution to the parents' requirement of "I want my kids to not watch disturbing things" might be to enforce domain tags (violence, sex, guns, religion, social media, drugs, gambling, whatever) and allow ISPs to set filters per paying client, so people don't have to set up filters on their own (but they can).
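A minimal sketch of that per-client filter in TypeScript, assuming some shared tag registry exists (all tags, domains, and customer IDs below are invented):

    // Who maintains the tag registry is the hard, unsolved part.
    type Tag = "violence" | "sex" | "guns" | "social" | "drugs" | "gambling";

    const domainTags: Record<string, Tag[]> = {
      "casino.example": ["gambling"],
      "chat.example": ["social"],
    };

    // Each paying customer opts into the categories they want blocked;
    // the default is no filtering at all.
    const clientFilters: Record<string, Tag[]> = {
      "customer-42": ["gambling", "sex"],
    };

    function isAllowed(clientId: string, domain: string): boolean {
      const blocked = new Set(clientFilters[clientId] ?? []);
      return !(domainTags[domain] ?? []).some((tag) => blocked.has(tag));
    }

    console.log(isAllowed("customer-42", "casino.example")); // false: blocked
    console.log(isAllowed("customer-42", "chat.example"));   // true
    console.log(isAllowed("customer-7", "casino.example"));  // true: no filter set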
But it's a complex topic, and IMO a simpler solution is to just not leave kids alone on the internet until you trust them enough.
Seems like right now the Aus Government isn't sure how they want it to work and is currently trialing some things. But it does seem like they at least don't want social media sites collecting ID.
E.g. if you produce eggs and you can't avoid salmonella at some point your operation should be shut down.
Facebook and its ilk have massive profits, they can afford more moderators.
Meanwhile, moral panics are at least as old as the Salem Witch Trials.
As others have said, that's the case already and not just in Australia. Same in lots of other places like the UK and the whole EU. Less so in the US (though they can demand any data the ISP has, and require ISPs to collect data on individuals)
> Australia is quietly extremely authoritarian.
It is weird, as a recent-ish migrant I do agree, there are rules for absolutely bloody everything here and the population seems in general to be very keen on "Ban it!" as a solution to everything.
It's also rife with regulatory capture - Ah, no mate, you can't change that light fitting yourself, gotta get a registered sparky in for that or you can cop a huge fine. New tap? You have to be kidding me, no, you need a registered plumber to do anything more than plunger your toilet, and we only just legalised that in Western Australia last year.
It's been said before, but at some point the great Aussie Larrikin just died. The Wowsers won and most of them don't even know they're wowsers.
How can you argue any of this is NOT in the interest of centralised surveillance and advertising identities for ADULTS when there’s such an easy way to bypass the regulation if you’re a child?
Moderation is hard when you prioritise growth and ad revenue over moderation, certainly.
We know a good solution - throw a lot of manpower at it. That may not be feasible for the giant platforms...
Oh no.
This isn't evidence that they have a system for taking down content without a huge number of false positives. It's evidence that the previous administrators of Twitter were willing to suffer a huge number of false positives around accusations of racism and the current administrators are willing to suffer them around accusations of underaged content.
By this principle the government can't operate the criminal justice system anymore because it has too many false positives and uncaptured negative externalities and then you don't have anything to use to tell Facebook to censor things.
> Facebook and its ilk have massive profits, they can afford more moderators.
They have large absolute profits because of the large number of users but the profit per user is in the neighborhood of $1/month. How much human moderation do you think you can get for that?
That is true. I spent my time coding a 2D game engine on a 486; it eventually went nowhere, but it was still cool to do. But if I'd had the internet then, all that energy would have been put into pointless internet stuff.
The worst content out there is typically data-heavy; the best isn't necessarily, as in most cases it can well be just text.
It pushes for heavy content filtering, age checks, and algorithm tweaks to hide certain results. That means more data tracking and less control over what users see. Plus, regulators can order stuff to be removed from search results, which edges into censorship. It sets the stage for broader control, surveillance, and over-moderation. The slow-burn additions all stack up: digital ID, the NBN monopoly, ISP-locked DNS servers, TR-069, hidden VoIP credentials, etc. Australia seems to be the West's testing ground for this kind of policy.
I would like to say "It is all because of X political party!" but both the majors are the same in this regard and they usually vote unanimously on these things.
Do like banks: Know Your Customer. If someone commits a crime using your assets, you are required to supply evidence to the police. You then ban the person from using your assets. If someone makes false claims, ban that person from making reports.
Now your rate of false positives is low enough to handle.
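A toy sketch of that accountability loop in TypeScript, with invented structures and an invented strike threshold:

    // Reports only count when the reporter is identity-verified, and
    // verified reporters who repeatedly file false claims lose the
    // ability to report at all.
    interface Reporter {
      verifiedIdentity: boolean; // KYC done once, as with a bank account
      falseReports: number;
      banned: boolean;
    }

    function acceptReport(r: Reporter): boolean {
      return r.verifiedIdentity && !r.banned;
    }

    function recordOutcome(r: Reporter, reportWasFalse: boolean): void {
      if (!reportWasFalse) return;
      r.falseReports += 1;
      if (r.falseReports >= 3) r.banned = true; // invented threshold
    }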
Seems like a long-term slow burn toward government tendrils, just like digital ID, and the example given came across as desperate to show any real function, contradictory even.
Pivot to "what about the children", a few small steps, and we're right back on the slippery-slope gradient.
Afterwards the same people who employed this rhetoric claimed they, "Always knew the claims were false".
There was definite risk of loss of political capital for would-be dissenters. Politicians may or may not have had skeptical reservations. It is a moot point if they didn't proactively dissent. Similarly, it isn't especially meaningful in the context of this discussion if those who did dissent were locked out of popular media discourse. The overall media environment repeated the claims unquestioningly. Dissent was maligned as conspiracy theory.
Another interesting manifestation were those who claimed that WMDs were found. Clearly the goal posts were shifted here. Between those who were "always suspicious" and those who believe that the standards of WMDs were met, very few people remain who concede that they were hoodwinked by the propaganda narrative. Yet at the same time, it isn't a stretch to observe that a war or series of wars was started based on false premises. No one has been held to account.
My contention is more that they don’t have the will, because it would impact profits and that it’s possible that if they did implement effective moderation at scale it might hurt their bottom line so much they are unable to keep operating.
Further, that I would not lament such a passing.
I’m not saying tiny forums are some sort of panacea, merely that huge operations should not be able to get away with (for example) blatant fraudulent advertising on their platforms, on the basis that “we can’t possibly look at all of it”.
Find a way, or stop operating that service.
In the context of Australia objecting to lack of moderation I'm not sure it matters. It seems reasonable for a government to set minimum standards which companies that wish to operate within their territory must abide by. If as you claim (and I doubt) the current way of doing things is uneconomical under those requirements then perhaps it would be reasonable for those products to be excluded from the Australian market. Or perhaps they would instead choose to charge users for the service? Either outcome would make room for fairly priced local alternatives to gain traction.
This seems like a case of free trade enabling an inferior American product to be subsidized by the vendor thereby undercutting any potential for a local industry. The underlying issue feels roughly analogous to GDPR except that this time the legislation is terrible and will almost certainly make society worse off in various ways if it passes.
Why is this even controversial? Is there any rational reason why kids should have smartphones? The only reason I see is to let the big companies earn money, and because adults don't want to admit that they are addicted themselves.
Not quietly, I don't think. Not like Australia is known for freedom and human rights. It's known for expeditionary wars, human rights abuses, jailing whistleblowers and protesters, protecting war criminals, environmental and social destruction, and following the United States like a puppy.
Obviously we make case by case decisions regarding such things. There are plenty of ways in which governments could act that populations in the west generally deem unacceptable. Private prisons in the US, for example, are quite controversial at present.
It's worth noting that if the regulator actually enforces requirements then they become merely a cost of doing business that all participants are subject to. Such a development in this case could well mean that all the large social platforms operating within the Australian market start charging users in that region on the order of $30 per year to maintain an account.
But also, your proposal would deter people from reporting crimes because they're not only hesitant to give randos or mass surveillance corporations their social security numbers, they may fear retaliation from the criminals if it leaks.
And the same thing happens for people posting content -- identity verification is a deterrent to posting -- which is even worse than a false positive because it's invisible and you don't have the capacity to discover or address it.
Sometimes, but clearly not often enough.
Does a refusal get more active than a message that says “Our review team has found that [the account’s] post does not go against our Community Guidelines”?
> Then you report something real and it gets lost in a sea of fake reports.
It didn’t get ‘lost’ — they (or their contract content moderators at Concentrix in the Philippines) sat on it, and then sent a message that said they had decided to not do anything about it.
> What are they supposed to do about that?
They’ve either looked at the content and decided to do nothing about it, or they’ve lied when they said that they had, and that it didn’t breach policy. Which do you suppose it was?
Typically you would exempt smaller services from such legislation. That's the route Texas took with HB 20.
It is in combination with the high rate of false positives, unless you think the false positives were intentional.
> If as you claim (and I doubt) the current way of doing things is uneconomical under those requirements then perhaps it would be reasonable for those products to be excluded from the Australian market.
If they actually required both removal of all offending content and a low false positive rate (e.g. by allowing customers to sue them for damages for removals of lawful content) then the services would exit the market because nobody could do that.
What they'll typically do instead is accept the high false positive rate rather than leave the market, and then the service remains but becomes plagued by innocent users being victimized by capricious and overly aggressive moderation tactics. But local alternatives couldn't do any better under the same constraints, so you're still stuck with a trash fire.
What you describe is more like the debate on continental Europe, which translated in little support (most countries provided help with logistics and minimal "peacekeeping").
…
That's assuming their "review team" actually reviewed it before sending that message and purposely chose to allow it to stay up knowing that it was a false negative. But that seems pretty unlikely compared to the alternative where the reviewers were overwhelmed and making determinations without doing a real review, or doing one so cursory the error was made blind.
> They’ve either looked at the content and decided to do nothing about it, or they’ve lied when they said that they had, and that it didn’t breach policy. Which do you suppose it was?
Almost certainly the second one. What would even be their motive to do the first one? Pedos are a blight that can't possibly be generating enough ad revenue through normal usage to make up for all the trouble they are, even under the assumption that the company has no moral compass whatsoever.
And for me it was a place to explore my passions way better than any library in a small city in Poland would allow.
And sure - also a ton of time on internet games / MUDs, chatrooms etc.
And the internet allowed me to publish my programs, written in Delphi, since I was 13-14yo, and to meet other programmers on Usenet.
On the other hand, if not for the internet, I might have socialised way more IRL, probably doing things that were way less intellectually developing (but more socially).
It just hit me that I need to ask one of my friends from that time what they did in their spare time, because I honestly have no idea.
This would be completely and utterly unenforceable in any capacity. Budget smartphones are cheap enough and ubiquitous enough that children don't need your permission or help to get one. Just as I didn't need my parents' assistance to have three different mobile phones in high school when, as far as they knew, I had zero phones.
You can make case by case decisions regarding individual aspects of the system, but no modern criminal justice system exists that has never put an innocent person behind bars, much less on trial. Fiddling with the details can get you better or worse but it can't get you something that satisfies the principle that you can't operate if you can't operate without ever doing any harm to anyone. Which implies that principle is unreasonable and isn't of any use in other contexts either.
> It's worth noting that if the regulator actually enforces requirements then they become merely a cost of doing business that all participants are subject to. Such a development in this case could well mean that all the large social platforms operating within the Australian market start charging users in that region on the order of $30 per year to maintain an account.
The premise there is that you could solve the problem for $30 per person annually, i.e. $2.50/month. I'm left asking the question again, how much human moderation do you expect to get for that?
Meanwhile, that's $30 per service. That's going to increase the network effect of any existing service because each additional recurring fee or requirement to submit payment data is a deterrent to using another one. And maybe the required fee would be more than that. Are you sure you want to entrench the incumbents as a permanent oligarchy?
You don’t get that notification showing up when you buy alcohol or cigarettes at a shop; it would have been easier being a minor buying beer. The porn companies know what they are doing, or they would have created an adults robots.txt and published an RFC. Hope they won’t ask for age verification for the Shroomery.
Uhuh.
> I’m an Australian who values privacy and civil liberties more than most I meet.
No you're not.
The eSafety commissioner is an American born ex-Microsoft, Adobe and Twitter employee who was appointed by the previous conservative government. I wouldn't be so sure her values are representative of the so-called Australian nanny state or the Australian Labor Party.
Is the theory supposed to be that the moderation would cost them users, or that the cost of paying for the moderation would cut too much into their profits?
Because the first one doesn't make a lot of sense, the perpetrators of these crimes are a trivial minority of their user base that inherently cost more in trouble than they're worth in revenue.
And the problem with the second one is that the cost of doing it properly would not only cut into the bottom line but put them deep into the red on a permanent basis, and then it's not so much a matter of unwillingness but inability.
> I’m not saying tiny forums are some sort of panacea, merely that huge operations should not be able to get away with (for example) blatant fraudulent advertising on their platforms, on the basis that “we can’t possibly look at all of it”.
Should the small forums be able to get away with it though? Because they're the ones even more likely to be operating with a third party ad network they neither have visibility into nor have the leverage to influence.
> Further, that I would not lament such a passing.
If Facebook was vaporized and replaced with some kind of large non-profit or decentralized system or just a less invasive corporation, would I cheer? Probably.
But if every social network was eliminated and replaced with nothing... not so much.
The ruling class in the west are generally extremely anti-religious. They have a good reason to be - the biggest religion in the west is anti-wealth (the "eye of the needle" things etc.) and generally opposed to the values of the powerful.
The US is a sort of exception, but they say things to placate the religious (having already been pretty successful in manipulating and corrupting the religion) but very rarely actually do anything. I very much doubt the president (or anyone else) in the current US government is going to endorse "give all you have to the poor".
You are wrong to blame the Internet (or today LLMs). Do not blame the tool.
Sure I consumed sex when I was a kid, but I did a fuckton of coding of websites (before JavaScript caught up, but in JavaScript) and modding of games. I met lots of interesting, and smart people on IRC with mutual hobbies and so forth. I did play violent games, too, just FYI, when I was not making mods for them.
This one. Not just in terms of needing to take on staff, but it would also cut into their bottom line in terms of not being able to take money from bad-faith operators.
> And the problem with the second one is that the cost of doing it properly would not only cut into the bottom line but put them deep into the red on a permanent basis, and then it's not so much a matter of unwillingness but inability.
Inability to do something properly and make a commercial success of it, is a 'you' problem.
Take meta and their ads - they've built a system in which it's possible to register and upload ads and show them to users, more or less instantly with more or less zero human oversight. There are various filters to try and catch stuff, but they're imperfect, so they supply fraudulent ads to their users all the time - fake celebrity endorsements, various things that fall foul of advertising standards. Some just outright scams. (Local family store you never heard of is closing down! So sad! Buy our dropshipped crap from aliexpress at 8x the price!)
To properly, fully fix this they would need to verify advertisers and review ads before they go live. This is going to slow down delivery, require a moderate sized army of reviewers and it's going to lose them revenue from the scammers. So many disincentives. So they say "This is impossible", but what they mean is "It is impossible to comply with the law and continue to rake in the huge profits we're used to". They may even mean "It is impossible to comply with the law and continue to run facebook".
OK, that's a classic 'you' problem. (Or it should be). It's not really any different to "My chemical plant can't afford to continue to operate unless I'm allowed to dump toxic byproducts in the river". OK, you can't afford to operate, and if you keep doing it anyway, we're going to sanction you. So ... Bye then?
> Should the small forums be able to get away with it though?
This is not really part of my argument. I don't think they should, no. But again - if they can't control what's being delivered through their site and there's evidence it contravenes the law, that's a them problem and they should stop using those third party networks until the networks can show they comply properly.
> if every social network was eliminated and replaced with nothing... not so much.
Maybe it's time to find a new funding model. It's bad enough having a funding model based on advertising. It's worse having one based on throwing ad messages at people cheap and fast without even checking they meet basic legal standards. But here we are.
I realise this whole thing is a bit off-topic as the discussion is about age-verification and content moderation, and I've strayed heavily into ad models....
Why? If you read the original legislation https://parlinfo.aph.gov.au/parlInfo/search/display/display....
You get 30,000 civil penalty units if you are a scumbag social media network and you harvest someone's government ID. You get 30,000 civil penalty units if you don't try to keep young kids away from the toxic cesspool that is your service, filled with bots and boomers raving about climate change and reposting Sky News.
This absolutely stuffs those businesses who prey on their users, at least for the formative years.
And when I think about it like that? I have no problem with it, nor the fact it's a pain to implement.
Read it. It is specifically targeting companies who currently run riot over young individuals' digital identity, flog it off to marketers, and treat them as a product.
Read the legislation. Ask yourself if it's better for a country's government or a foreign set of social media companies to control what young people see. One has a profit motive above all else. One can at least be voted for or against.
Read the bill. Gov ID collection is just as much a violation as failing to take any action.
The reasoning is often “people might contaminate the water supply for a whole street!” Which just points to poor provision of one way valves at the property line.
But yeah, illegal.
I agree there are limits with what you want to do on electricity, but turning the breaker off and replacing a light fitting or light switch is pretty trivial. And I know people do just get on with it and do some of this stuff themselves anyway.
Was particularly pissed off that in January this year the plumbing “protections” were extended to rural residents who aren’t even connected to mains water or sewage, to protect us from substandard work by … making it illegal for us to do it ourselves. Highly annoying.
Literally right there in the bill: showing your papers and the company collecting them is just as much of an offense as them doing nothing to stop kids from being run through the misinformation mill.
The framing that explicit material is bad for kids, while probably true, is beside the point. Lots of things a parent could expose a child to could be bad, but it's always been seen as up to the parent to decide.
What the government should do is ensure that parents have the tools to raise their kids in the way they feel is appropriate. For example, they could require device manufacturers to implement child modes, or that ISPs provide tools for content moderation, which would put parents in control. This instead places the state in the parental role with its entire citizenry.
We see this in the UK a lot too. This idea that parents can't be trusted to be good parents and that people can't be trusted with their own freedom, so we need the state to look after us, seems to be an increasingly popular view. I despise it, but for whatever reason that seems to be the trend in the West today – people want the state to take on a parental role in their lives. Perhaps aging demographics has something to do with it.
Oh how convenient.
It will also make it harder for the grubby men in their 30s and 40s to groom 14yo girls on Snapchat, which is a bonus.
Not really sure what this has to do with the Australian government or Australian people. We can't even tax these foreign companies fairly. If we did try to regulate them, the US government would step in and play the victim, despite a massively one-sided balance of trade due to US services being shoved down our throats. We need to aggressively pursue digital sovereignty.
It's authoritarianism, and frankly paving the way for fascism. People are already getting visits from the police for unsavory Facebook posts. Be careful not to criticize your government online, because soon every post will be instantly judged by an AI system and you'll be flagged as a disobedient citizen in need of a bit of the old boot.
Might want to wind back that Aussie Exceptionalism a notch or three. That or read up a little more.
The last one is difficult because you need a common standard, either someone becomes a monopoly (or two or three quasi-monopolies such as google/apple) or better still this is one of few cases where government regulation could do more good than harm.
I think China is already close to the last phase at least in cities, going down the government regulated route?
This is highly country dependent of course - in some places shops must accept coins by law, even if it's so unusual that you have to roll a critical success to get the right amount of change back.
I would like a world where we can give children physical pocket money rather than some abstraction, and they don't need a smartphone of their own to check their balance. But we'll probably have to fight for that at some point.
It's more of a constantly lowering bar, not a slippery slope that just needs to be stopped once.
Or in words you might find more appealing: it's worse than a slippery slope.
> your browser history may be used as evidence in a criminal case
Already the case. Mostly for the kind of dumb criminal who is suspected of murder and has been found googling "defences to murder" and "how to hide a body".
> the companies providing the logins must provide government with the identities
If there's a court order (good) or a national security letter (occasionally good but very open to abuse). Maybe the NSA or some guy in DOGE has automatic API access to this data anyway.
> you must be logged in to visit these specific sites where you might see naked boobies, and if you're under age you can't - those sites must enforce logins and age limits
Already the case for youtube and reddit content marked NSFW - either by the creator or by a fairly stupid algorithm. (You can see these boobies, but not those ones.) But the age verification is mostly "open a new account and enter a birth date". Also reddit has the dumbest age verification/login bypass ever. (Your honor, editing a URL is nation-state level hacking and we can't reasonably defend against that.)
> all visits to all sites will be recorded
Something something Permanent Record.
> you must have a valid login with one of these providers in order to use the internet
OK, this one is cheating a bit, but don't you need a Google (or Samsung, etc.) account to set up an Android phone, let alone access the internet?
Also cheating a bit but you need a login and contract with your ISP to get on the internet too.
Even if you could stop phones, you won't stop them from accessing it from a near-infinite supply of other devices.
It's pure and utter fantasy.
Total rort.
Why should this be the government's responsibility rather than the parents'?
Sure, there was a lot of dicking around, but overall it was positive.
Notice that the goalposts shifted subtly from moderation of disallowed content to distribution of age-restricted content. The latter isn't amenable to size-based criteria for obvious reasons.
Note that I don't think the various ID laws are good ideas. I don't even think they're remotely capable of accomplishing their stated goals. Whereas I do expect that it's possible to moderate a given platform decently well if the operator is made to care.
For the same reason that the government limits smoking and alcohol. Because the parents can't/won't.
I expect you can get quite a bit of moderation for that price. If a given user is exceeding that then they are likely so problematic that you will want to ban them anyway. Speaking from personal experience, the vast majority of users never act in a way that requires attention in the first place.
If the law discriminates on size you don't end up with (or at least exacerbate) the oligarchy scenario. In fact it acts to counter network effects by economically incentivizing use of the smaller services.
But for a phone? A child or early teen shouldn't be able to afford a phone, nor sign a contract with a cellphone service provider while underage. That should be collaboration enough. If they got a phone beforehand, it's because the parents themselves got it for them.
Even considering a mid teen starting work, buying a phone and using it with WiFi, they can only really own things with the parents' approval. They can't really use it enough to form an addiction without the parents noticing and having the opportunity to confiscate it.
I assume that in your post "WA" means Western Australia -- as I can't imagine this kind of absurd protectionism law flying in Washington state, even though it's a little more paternalistic than average for the US.
It's plausible that it wasn't what some of the supporters intended, but that was the result, and the result wasn't entirely unpredictable. And it plausibly is what some of the supporters intended. When PornHub decided to leave Texas, do you expect they counted it as a cost or had a celebration?
> Notice that the goalposts shifted subtly from moderation of disallowed content to distribution of age-restricted content. The latter isn't amenable to size-based criteria for obvious reasons.
Would the former be any different? Sites over the threshold are forced to do heavy-handed moderation, causing them to have a significant competitive disadvantage over sites below the threshold, so then the equilibrium shifts to having a larger number of services that each fit below the threshold. Which doesn't even necessarily compromise the network effect if they're federated services so that the network size is the set of all users using that protocol even if none of the operators exceed the threshold.
> Note that I don't think the various ID laws are good ideas. I don't even think they're remotely capable of accomplishing their stated goals. Whereas I do expect that it's possible to moderate a given platform decently well if the operator is made to care.
I'm still not clear on how they're supposed to do that.
The general shape of the problem looks like this:
If you leave them to their own devices, they have the incentive to spend a balanced amount of resources against the problem, because they don't actually want those users but it requires an insurmountable level of resources to fully shake them loose without severely impacting innocent people. So they make some efforts but those efforts aren't fully effective, and then critics point to the failures as if the trade-off doesn't exist.
If you require them to fully stamp out the problem by law, they have to use the draconian methods that severely impact innocent people, because the only remaining alternative is to go out of business. So they do the first one, which is bad.
The principle was that if you can't operate without doing harm, you can't operate.
But then nobody can operate, including the government.
If you give up that absolutist principle and concede that there are trade-offs in everything, that's the status quo and there's nothing to fix. They already have the incentive to spend a reasonable amount of resources to remove those users, because they don't want them. The unfortunate reality is that spending a reasonable amount of resources doesn't fully get rid of them, and spending an unreasonable amount of resources (or making drastic trade-offs against false positives) is unreasonable.
> I expect you can get quite a bit of moderation for that price. If a given user is exceeding that then they are likely so problematic that you will want to ban them anyway. Speaking from personal experience, the vast majority of users never act in a way that requires attention in the first place.
It's not about whether some specific user exceeds the threshold. You have a reporting system and some double-digit percentage of users will use it as an "I disagree with this poster's viewpoint" button. Competitors will use it to try to take down the competition's legitimate content. Criminal organizations will create fake accounts or use stolen credentials and use the reporting system to extort people into paying ransom or the fake accounts will mass report the victim's account, and then if even a small percentage of the fake reports make it through the filter, the victim loses their account. Meanwhile there are legitimate reports in there as well.
You would then need enough human moderators to thoroughly investigate every one of those reports, taking into account context and possibly requiring familiarity with the specific account doing the posting to determine whether it was intended as satire or sarcasm. The accuracy has to be well in excess of 99% or you're screwed, because even a 1% false positive rate means the extortion scheme is effective because they file 1000 fake reports and the victim's account gets 10 strikes against it, and a 1% false negative rate means people make 1000 legitimate reports and they take down 990 of them but each of the 10 they got wrong has a story written about it in the newspaper.
Banning the accounts posting the actual illegal content is what they already do, but those people just make new accounts. Banning the accounts of honest people who get a lot of fake reports makes the problem worse, because it makes it easier to do the extortion scheme and then more criminals do it.
> If the law discriminates on size you don't end up with (or at least exacerbate) the oligarchy scenario. In fact it acts to counter network effects by economically incentivizing use of the smaller services.
But that was the original issue -- if you exempt smaller services then smaller services get a competitive advantage, and then you're back to the services people actually using not being required to do aggressive moderation. The only benefit then is that you got the services to become smaller, and if that's the goal then why not just do it directly and pass a law capping entity size?
The ID law, sure, I doubt the proponents of it care which alternative comes to pass (ID checks or market exit) since I expect they're opposed to the service to begin with. But that law has no size carveout, I didn't use it as an example, and I don't think it's a good law. So we're likely in agreement regarding it.
> Would the former be any different?
I expect so, yes. You've constructed a dichotomy where heavy handed moderation and failure to moderate effectively are the only possible outcomes. That seems like ideologically motivated helplessness to me.
I'm also not entirely clear what we're talking about anymore. The proposed law has to do with ID checks, the sentiment expressed was "if you don't moderate for yourselves the government will impose on you", and somehow we've arrived at you confidently claiming that decent moderation is unattainable. Yet you haven't specified the price range nor the criteria being adhered to.
The point you raise about federated networks is an interesting one, however it remains to be seen if such networks exhibit the same dynamics that centralized ones do. In the absence of a profit driven incentive for an algorithm that farms engagement we don't yet know if the same social ills will be present.
Blocking it at the service level would be significantly more effective, and would mean all kids would have to socialise and message using IM platforms instead.
> maybe if you can't avoid causing harm then you shouldn't be allowed to operate?
That isn't plausible to interpret as an absolute. The tradeoff is implied - as far as I can tell there isn't any other reasonable interpretation. It follows that the contextual implication is that the status quo is one of excessive harm.
Clearly others don't agree with your view that a "reasonable amount of resources" is being spent on the problem at present.
To your hand wringing about abuse of reports, nearly all of the smaller platforms I have participated on have treated that as some form of bannable offense.
Responding to reports doesn't take anywhere near as much effort as you're making out. The situation with the large centralized networks is analogous to a company that keeps cutting its IT budget while management loudly complains that it's simply impossible to get reliable infrastructure in this day and age without spending an excessive amount.
> if that's the goal then why not just do it directly and pass a law capping entity size?
Because that's (quite obviously) not the goal. To date smaller venues have very good track records in my personal experience. The idea being floated was that the centralized services that actively manipulate the behavior of large portions of the population either improve theirs or be removed from the market.
Take the minor losses, be glad that conservative party didn't win, and watch the shitstorm in US and Europe from afar.
And of course not! As mentioned - the rule has even recently been extended to 'protect' people like us who live semi off-grid, with rainwater capture for drinking and a septic system.
Australians really seem to loooooove rules.
And of course, for the most part, nobody's actually checking this stuff and people pay varying levels of attention to the rules. Seems like a waste of time all round.
I didn’t say it was going to be easy. But all the middle-aged guys who created all these tech startups that are rotting your children’s brains didn’t grow up with this level of technology.
So it’s clearly possible. Does the government or parents have the will to make the change? Maybe not.
Because AFAICT some of the big platforms are failing at this, before we even get into content moderation.
> Throwing a lot of manpower at moderation only gets you lots of little emperors that try to enforce their own views on others.
Do you consider dang a 'little emperor'? If anything HN seems proof that communities can thrive with moderation.
(1) For the purposes of this Act, age-restricted social media platform means:
(a) an electronic service that satisfies the following conditions:
(i) the sole purpose, or a significant purpose, of the service is to enable online social interaction between 2 or more end-users;
(ii) the service allows end-users to link to, or interact with, some or all of the other end-users;
(iii) the service allows end-users to post material on the service;
(iv) such other conditions (if any) as are set out in the legislative rules; or
(b) an electronic service specified in the legislative rules;
but does not include a service mentioned in subsection (6).

I see nothing in there that talks about young people, identities, flogging anything to marketers, or treating people as product.
I don't dispute that that happens. All I'm saying is that this act is not solving that problem, isn't intended to solve that problem, and is actually part of a larger push to censor the internet for Australians.
This act, as written, requires all interactive websites that are accessible to end-users in Australia to implement age restrictions. And in order to implement age restrictions they must remove anonymity. Which is the point.
It's inaccurate rhetoric, is the point. You would have to say "maybe if you can't avoid causing excessive harm you shouldn't be allowed to operate" in order to have a reasonable statement, but then you would be inviting the valid criticism that "excessive harm" isn't what's currently happening. And dodging that criticism by eliding the qualifier is the thing I'm not inclined to let someone get away with.
> Clearly others don't agree with your view that a "reasonable amount of resources" is being spent on the problem at present.
But do they disagree based on some kind of logical reasoning or evidence, or because they have a general feeling of wanting to protect kids, which can't tell you whether any given proposal to do so will cost more than it's worth?
> To your hand wringing about abuse of reports, nearly all of the smaller platforms I have participated on have treated that as some form of bannable offense.
Which has two problems. First, if someone reports something which is on the line and you decide that it's not over the line even though it was close, you're going to ban the person who reported it? And second, the people submitting false reports as a business model don't care about getting banned because they'll just open more accounts, or they were using compromised accounts to begin with and then you're banning the accounts of innocent people who have had their machines infected with malware.
> Responding to reports doesn't take anywhere near as much effort as you're making out.
Responding to reports with high accuracy absolutely does require a large amount of resources. Consider the most common system that actually tries to do that -- and even then still often gets it wrong -- is the court system. You can't even get within two orders of magnitude of that level of resources per report while still calling it a feasible amount of resources to foist onto a private party as an unfunded mandate.
> Because that's (quite obviously) not the goal.
If not propping up megacorps is a goal -- and it should be -- then encouraging smaller services is a rewording of that goal. If you exempt smaller services and that causes smaller services to take over, the result is that the services that take over are exempt. And when that's going to be the result then you can remove the unnecessary indirection.
> To date smaller venues have very good track records in my personal experience.
One of the ways they do this is that smaller services generally have a niche, and then depending on what that niche is, they can avoid a lot of this trouble because the nature of their audience doesn't attract it.
This site is a good example. Discussion of highly contentious debates is heavily suppressed, the site doesn't support posting images or videos and the audience is such that only a specific set of topics will get any traction.
Which is fine if that's what you're looking for, and there is a place for that, but services with a different focus will attract different elements and then have more of a problem. And saying "well just don't host any of that" is the false positives problem. Should there be nowhere that can host contentious political debates or where adults can express their sexuality?
The large sites have these problems because they're general purpose and thereby attract and include all kinds of things. If you split things into special-purpose sites while still expecting the sum of them to provide full coverage then some of them can avoid the problems by limiting their scope, but then the other ones have to do it and you've only moved the problem to a different place instead of actually solving it.
If you think X domestic legislation doesn't come with its own baggage of profit motives, hidden agendas and attempts at controlling narratives for young people, you're in for a rude awakening if you dig a bit deeper.
That aside, I'd rather parents be the ones who decide what minors see, instead of some hackneyed government censorship program rammed through by a plethora of boogeymen and possibly (very likely) later used to track and censor adults' access to information.
Laws like these and their supporters can both fuck off. It's the same old story going back centuries: nationalist, religious, or generally moralizing bullshit about the supposed dangers of some nefarious influence being used to restrict what I decide to read, watch, or think about.
I think people see laws and institutions encroaching on the internet as removing the 'wild west' aspect that existed on the internet in the early days. I have personally felt and have heard others express, a keen sense of nostalgia for that era. To many, more developed = less wild west.
People think of this legislation as increasing the complexity by going further away from that simpler model. "Oh great, now I have to sign in to Google to view this" sort of thing.
I too get annoyed at small stuff, like how you can't quote-search all of Google anymore. Things are more complex and just... different. Social media used to be a simple feed of people who you followed and not much else. The thing is, I believe the fact that it's bigger and more complex, the fact that it's the primary place many people interact, is actually why we need to legislate it.
The bill isn't legislating against Meta, or Google, or any of the big tech companies that are making the internet a worse place. If anything, the bill entrenches their place in the whole system by using their logins to identify minors.
I see nothing in this bill that will encourage the internet to be friendlier, or more creative, or less enshittified, or in any way "better". What are you seeing that I'm not?