

Exactly: society is full of dickheads, then people fuss when they’re treated as who they are, as if they’re entitled to more.


I think the next step should be developing a test that can predict how someone will react to it.
Unnecessary: foolish people always gonna fool. Anyone that far gone in the lacking judgement department demands far more help than anyone can reasonably be expected to provide, and attempting to “foolproof” for them will only drag everyone else down while doing nothing for them. Likewise, just because some people overeat junk food doesn’t mean we need to devise some test to decide who can safely get junk food: it’s a personal choice, the risks of bad judgement are reasonably understood, & that bullshit’s beyond paternalistic.


Does it help to frame it in a different light for you if you think of it as those companies exploiting vulnerable people’s disorders to extract money from them?
Not at all: we don’t go winning lawsuits against any of those companies for promoting themselves to appeal to consumers just because the dysfunctional among us may overconsume their products. Liberty comes with accepting responsibility for the reasonably foreseeable consequences & risks of our choices, or no one will be able to realize liberty once someone’s own responsibility becomes everyone else’s duty. Society can’t reasonably be expected to cater to everyone’s irrational or dysfunctional manifestations & whims. The legal standard is the reasonable person, not the dysfunctional one. Moreover, the existence of children doesn’t imply we need to childproof all of society: people are still entitled to liberty in their adult activities & vices.
When risks are open & obvious, such as the overconsumption of certain foods & legal substances, that’s generally viewed as a matter of personal choice rather than an unreasonably dangerous product defect. Even when kids grow obese from overeating junk food, blame primarily lies with whoever provides them that food rather than with the product itself, no matter how appealing the design of the food, the design on the container, or its advertisements. Especially with the latest wave of moral panic over social media, the risks & dysfunctions of obsessively overconsuming social media or any information service to the extent that it impairs us are open & obvious. Parents who give their children these devices, observe excessive attachment, and don’t cut them off bear considerable responsibility.
Information & the devices to view it are generally benign & noncoercive. People use these services because some find them useful & engaging to their interests. Features that effectively meet user demand for engaging information offer legitimate utility to a reasonable person without impairing them. Such features aren’t defects, and “fool-proofing” them would hamper their utility to functional adults who can deal with the “dangers” of attention-grabbing information.
However, even supposing such features defectively make the system unreasonably dangerous in a reasonably foreseeable manner, that only requires service providers to give fair warning. Once the duty to warn has been met, users are reasonably aware of the risks, and responsibility shifts to the risk-takers or to parents who give children access despite reasonably knowing the risk.
Telling those people to just have self control is like telling someone with depression to just stop being sad.
We can’t rearrange all of society just because some people have depression. Liberty means not imposing on others issues we should be dealing with ourselves or through appropriate services specifically for that.


I don’t know. Seems like self-control issues. People can get addicted to anything: shopping, sex, internet use, work, gaming, exercise. I also disagree with prohibitions on gambling, drug use, prostitution: it’s their money, their body, etc.
Penalizing systems of communication & information delivery seems like overreach. The harm seems phony & averted by basic self-control.


The US federal courts had an interesting opinion there: parents may always allow their children to access protected speech. Even with sex-related materials, the Supreme Court has stated
the prohibition against sales to minors does not bar parents who so desire from purchasing the magazines for their children.
They regarded as constitutionally defective laws that impose a single standard of public morality. Instead, they’d allow laws that “support the right of parents to deal with the morals of their children as they see fit”. Laws that take away parental control are also impermissible.
“It is cardinal with us that the custody, care and nurture of the child reside first in the parents, whose primary function and freedom include preparation for obligations the state can neither supply nor hinder.” Prince v. Massachusetts, supra, at 166.
In another decision, they regard & defend parental responsibility & discretion in leaving access open to children, and they find measures that “enterprising and disobedient” children can circumvent preferable to unacceptable alternatives.
The Commonwealth argues that central blocking would not fulfill the state’s compelling interest as effectively as the access number does because minors with phone lines could request unblocking or could gain access to unblocked phones. It also argues that a parent who chooses to unblock the home’s phone to gain access to sexually explicit material for himself or herself thereby places dial-a-porn phone service within the reach of minors with access to that phone. In this respect, the decision a parent must make is comparable to whether to keep sexually explicit books on the shelf or subscribe to adult magazines. No constitutional principle is implicated. The responsibility for making such choices is where our society has traditionally placed it — on the shoulders of the parent. See Bolger v. Youngs Drug Products Corp., 463 U.S. 60, 73-74, 103 S.Ct. 2875, 2883-84, 77 L.Ed.2d 469 (1983) (parental discretion controlling access to unsolicited contraceptive advertisements in the home is the preferred method of dealing with such material).
Even with parental control, the Commonwealth is undoubtedly correct that there will be some minors who will find access to unblocked phones if they are determined to do so. As the Supreme Court noted in Sable, “[i]t may well be that there is no fail-safe method of guaranteeing that never will a minor be able to access the dial-a-porn system.” 109 S.Ct. at 2838. Nonetheless, the Court did not deem the desire to prevent “a few of the most enterprising and disobedient young people,” id., from securing access to such messages to be adequate justification for a statutory provision that had “the invalid effect of limiting the content of adult telephone conversations to that which is suitable for children.” Id. at 2839. We hold that because the means used, requirement of an access code, substantially burdens the First Amendment right of adults to access to protected materials and is not the least restrictive alternative to achieve the compelling end sought, the statute cannot survive the constitutional attack.
So, according to them, presenting such content to children ought to be left up to their parents, and laws shouldn’t infringe on their right to do that.


OS level parental controls do not give a parent control over a child’s use of a social media platform
A quick web search indicates they can filter/block content, restrict apps, & report activity. Additional software can monitor communication (including social media) and alert guardians.
However, the legal opinion wasn’t that parental control software is the best or only better solution[1], but that alternatives (such as non-punitive laws promoting the use of client-side parental controls) exist that are more effective & less adverse than punitive laws, which are limited in their enforceability by jurisdiction & which unnecessarily burden & deter (thus harm) the free exercise of fundamental liberties.[2] Client-side parental controls only affect their users without affecting everyone else. Unlike regulations on site operators, they work on content originating outside a law’s jurisdiction. Even at the time of that federal court decision, parental controls could screen dynamic content (eg, live chats) over any protocol.
By far, the most appropriate answer is responsible adult involvement & supervision and the education of children to address motivation, coping, & responsible behavior.
The internet is global. A key problem with any coercive law is that its jurisdiction isn’t: just as 4chan.org can tell the UK’s Ofcom to go fuck itself, site operators beyond a law’s jurisdiction can tell its enforcers the same. Another issue is that the compliance burden falls harder on entrants than on the dominant companies in the industry, which have more resources to afford compliance, thus deterring competition. Do we really want to make it harder to displace our current social media companies with alternatives?
Communication alone rarely poses immediate danger: there’s usually a number of steps between the communication & actual harm where anyone can intervene. We can block or ignore unwanted communication & choose the information we disclose. Responsible people can guide their children on safety & control their access to the devices they give them.
A while ago, when my uncle struck his kid for making an unauthorized payment through the kid’s tablet, I scolded him for creating the situation where the kid could do that instead of setting up a child account with parental controls. When I asked him how child abuse is more responsible than reading some shit designed for him to understand and pressing a few buttons to use the system exactly as designed to prevent this shit from happening, he quickly got the point and did that in about an hour. This shit ain’t hard.
Better solutions already exist, they’re effective, and the solid recommendations governments already have for promoting them would work. Governments have largely chosen not to.
The cited recommendations I mentioned elsewhere went beyond parental control software into areas such as the promotion of standards & the development of better standards in the industry. ↩︎
Rather than accept any law, government has a duty to minimize compromises of fundamental rights in meeting its “compelling interests”. When government fails to prove that a law is the least adverse to fundamental liberties among alternatives that are at least as effective, that law must be rejected. ↩︎


And improve parental controls for children’s accounts. I’m sure there’s nothing currently giving a “parent” account high level control over a “child” account, but I’m happy to be corrected if I’m wrong.
Parental controls already exist in every major OS, they suffice to restrict & monitor social media, and they go unused.
A better solution might be for laws to provide parents with resources & incentives to parent their children’s online activity (including training to use the resources they already have) & to provide children with education in online safety & literacy. Decades ago, federal courts, citing commission findings & studies, recommended these alternatives as superior in effectiveness, in meeting government duties to minimize impact on civil liberties, in the allocation of law enforcement resources, etc. For the permanent injunction against COPA, the judge wrote
Moreover, defendant contends that: (1) filters currently exist and, thus, cannot be considered a less restrictive alternative to COPA; and that (2) the private use of filters cannot be deemed a less restrictive alternative to COPA because it is not an alternative which the government can implement. These contentions have been squarely rejected by the Supreme Court in ruling upon the efficacy of the 1999 preliminary injunction by this court. The Supreme Court wrote:
Congress undoubtedly may act to encourage the use of filters. We have held that Congress can give strong incentives to schools and libraries to use them. It could also take steps to promote their development by industry, and their use by parents. It is incorrect, for that reason, to say that filters are part of the current regulatory status quo. The need for parental cooperation does not automatically disqualify a proposed less restrictive alternative. In enacting COPA, Congress said its goal was to prevent the “widespread availability of the Internet” from providing “opportunities for minors to access materials through the World Wide Web in a manner that can frustrate parental supervision or control.” COPA presumes that parents lack the ability, not the will, to monitor what their children see. By enacting programs to promote use of filtering software, Congress could give parents that ability without subjecting protected speech to severe penalties.
I also agree and conclude that in conjunction with the private use of filters, the government may promote and support their use by, for example, providing further education and training programs to parents and caregivers, giving incentives or mandates to ISP’s to provide filters to their subscribers, directing the developers of computer operating systems to provide filters and parental controls as a part of their products (Microsoft’s new operating system, Vista, now provides such features, see Finding of Fact 91), subsidizing the purchase of filters for those who cannot afford them, and by performing further studies and recommendations regarding filters.
Adult supervision, child education on online safety & literacy, parental controls & filters are more effective at less expense to fundamental rights. Governments know this & conveniently forget it.
You’ve pointed out problems with (shadow)bans of organic, real person profiles & the uncontrolled spread of right-wing misinformation & propaganda. I’m not talking about accounts of politicians. Politicians are not everyone’s peers. People more strongly engage with messages spread through their network of peers.
That’s why I’m suggesting adopting the same propaganda machine tactics. That includes bots feigning right-wing accounts to gain clout with right-wing communities, then when they have enough credibility, deftly inject left-wing ideas into “right-wing propaganda”, spread misinformation damaging the right, denounce right-wing figures as not “right-wing” enough. Fight lies with lies, inflame tensions, cut rifts into the right.
Since you’re referring to Brazil, another article on the topic mentions
To Brazilian sociologist Sérgio Amadeu, “online social networks and platforms controlled by big tech companies are geopolitical structures increasingly aligned with the far right.” In June, at seminars held by Bolsonaro’s right-wing Liberal party (PL), executives from Meta gave workshops teaching how to use AI and achieve greater reach on the platform.
The left could very well do likewise to “use AI and achieve greater reach on the platform”. Instead of resisting and trying to restrict the lies & propaganda, they could embrace the same tactics & turn them against the right.
they would never allow such botting for left wing ideas like they do fascists.
I’ve yet to see that. Social media like reddit is overrun with bots & fake content that they seem unable or uninterested in preventing. It’s an arms race between platform operators & bot operators working out new ways to raise & defeat moderation controls. Shit gets through anyway. It’s a matter of competing & doing in kind.
mainstream online platforms themselves are not neutral
Unless you mean they’re not neutral about whatever yields income, I’m not sure that’s conclusive. Their “algorithm” optimizes for keeping users online regardless of what does that. A particular political orientation isn’t necessary to do that. That current ragebait predominantly leans right-wing may just reflect who’s putting the resources to sustain a ragebaiting campaign & evade bans. While I’ve seen signs & reports of the right gaining influence through viral propaganda & attempts to dominate mainstream platforms, I see little sign of the left seriously doing that.
When foreign adversaries weaponize social media, instead of putting out one-sided propaganda, they target friction points by trying to manipulate people on both sides. They’d not only push supportive propaganda, but also radicalizing propaganda to discredit the “supported” side to itself & inflame divisions.
Maybe some propaganda is needed to inflame right-wing divisions & exploit their contradictions. Left-wing propaganda could be put out in guises that appear right-wing and probably catch on: the right isn’t exceptionally bright.
The left does ignore this problem or get wrapped up in their useless “big picture” rhetoric that leads nowhere when this is a very practical, tangible problem. Dominating over the right-wing propaganda machine is the obvious answer. When I suggest the left needs their own propaganda bots & troll farms flooding social media, their own podcasters & influencers to propagate their propaganda, better engagement through local organizations to answer & counteract right-wing bullshit & push some of their own to keep the right-wing occupied, I get responses like
This is such a boomer take. This is like trying to claim Clinton lost in 2016 because she didn’t tweet enough or use the right young-person slang, skibidi
or digressions like the comments here that dismiss everything as another problem to pin on capitalism without offering constructive ideas that could seriously lead anywhere. A left-wing propaganda machine to outdo the right-wing is doable; I doubt propaganda bots would cost a fortune to set up. The loser mentality of just venting about impractical shit instead of organizing & reaching for practical solutions is a major part of the problem.


self-cleaning/pyrolytic oven on wheels


So, you’re digging into breaking web accessibility. What did the disabled do to you to deserve this abuse? Do you kick crutches & obstruct service animals, too?


You could have done
1. The extra…
2. The like/dislike…
3. The flow…
Instead, you went with the counterintuitive & unnatural
1 . The extra…
2 . The like/dislike…
3 . The flow…
which takes special effort. This ain’t making sense.


1 .
Bizarre. Did you mangle lists on purpose to impair web accessibility just because?


Vote enough non-Republicans into Congress, impeach, and convict? Wait for whatever is turning his skin purple to take its course?


If we rely on the logic of the German approach, we wouldn’t be able to call the thing a thing until it’s too late. The point being made is that if you wait long enough to be able to do a full historical analysis, you’ve effectively become an apologist for genocide on the basis of a lack of evidence.
Untrue: it’s a matter of accurate wording. “The evidence so far indicates they’re potentially…” or “For all we know, they could be…” gets the same idea across without violating integrity concerning degree of certainty or knowledge.
Providing material support to Israel is no different from providing material support to Nazi Germany
Technically & literally false: they are different. A lawyer can challenge the falsehood.
Providing material support to Israel is bad for the same reasons providing material support to any genocidal state including Nazi Germany is bad
Providing material support to Israel is providing material support to a genocidal state
Providing material support to Israel is as bad as providing material support to a feebler Nazi Germany
All technically correct or opinion.
Claiming shit is true before we have the evidence to justify it is invalid & another way to state you’re claiming shit you don’t actually know: you’re spouting shit. Spouting shit is fine in cool countries that respect liberty. However, Germany is not one of them. Spouting the wrong shit in Germany is legally risky: apparently, the law parses words with autistic literalism.
By punishing verbal laziness, the law doesn’t necessarily “support genocide”. It is coercing you to stop being a slob & express yourself with (annoying?) accuracy.
Staring isn’t rude. People are awfully self-conscious. People can just be. Weird is fine.