This explains a lot, honestly.
Everyone keeps telling me how “addictive” and “convincing” and “personal feeling” ChatGPT is.
Meanwhile, I’m over here like
“Can you stop saying skrrrt after every sentence while I’m trying to research a serious topic, it’s annoying”
“Understood, skrrrt 💥🌴🚗💨”
Wat
Huge Study
*Looks inside
this latest study examined the chat logs of 19 real users of chatbots — primarily OpenAI’s ChatGPT — who reported experiencing psychological harm as a result of their chatbot use.
Pretty small sample size. Even though they pulled from a large dataset, it’s still data from just 19 people.
AI sucks in a lot of ways sure, but this feels like fud.
*hugely funded?
The hugeness is probably
391,562 messages across 4,761 different conversations
That’s a lot of messages
If that’s only 19 users, that’s around 250 conversations per user 🤔
…fud?
fud: Fear, Uncertainty and Doubt. A tactic for denigrating a thing, usually by implication of hypothetical or exaggerated harms, often in vague language that is either tautological or not falsifiable.
It’s crypto bro speak.
What? The term FUD has been around since at least the 90s, though I think it’s significantly older than that
It predates crypto by nearly 100 years.
https://en.wikipedia.org/wiki/Fear%2C_uncertainty%2C_and_doubt#Etymology
and yet it doesn’t stop being their jargon
They also use the words “the,” “at,” “is,” and “it,” but that doesn’t make it their jargon.
We really need to stop condemning entire words just because some people we don’t like used them…
I can’t tell you how many times I’ve been accused of using a “dogwhistle” because I used a totally innocuous word in accordance with its literal meaning without having any idea that it’s apparently been co-opted by some group of hate-filled extremists because I don’t follow those groups and I don’t know their lingo…
Like, soon we won’t have any words left that we’re still allowed to use. Language is already getting dumbed down, and I’m tired of walking on eggshells lest I say a word that could potentially be misinterpreted in light of a vague association to a different term that has a double-entendre that some niche circles use in some reprehensible way in their ostensibly secret code, or that I didn’t know was a euphemism…
Are you unironically saying “fud”
Where are you hearing it so much? (And ideally can you describe it in a little more detail than saying it’s crypto bros again?)
Crypto bros are infamous for describing any criticism as FUD, no matter the criticism. It’s like a verbal tic. Here are some examples from the past couple days on the premiere Bitcoin social network:
When all this FUD ends and Bitcoin goes 🚀
Quantum FUD is at ATH
FUD Busters [NFT]
Flokicoin is built to last… Don’t follow the FUD.
The term FUD has been around longer & broader than that. But thanks for the explanation.
I have no argument there, the phrase was definitely not created by them, it’s just been beaten to death by them.
They’ve also overused a bunch of ancient and unfunny memes well past their expiration dates, and universally adopted a collection of depressingly dull and incorrect slogans. “FUD” is just the one that has interesting meaning outside their sad sphere.
No one follows those losers enough to know that except you. Apparently.
While I am aware that it’s a common crypto shill term, I think by this point crypto has fallen out of the mainstream, so their usage of terms doesn’t really matter.
And as others have pointed out, the term FUD has been used at least since the birth of WWW/modern internet.
I remember my old stats book saying a minimum of 30 data points is needed to assume a normal distribution. Also, these small sets are typically about proof of concept, so yeah, you’ve still got a point.
It’s about 300 samples for an estimate of the distribution at 95% confidence, IIRC. That assumes the samples are representative (unbiased). And 95% confidence doesn’t mean the result is within 95% of reality; it means that 5% of tests run this way would be expected to be inaccurate. There’s no way of knowing for sure whether this particular sample is one of the inaccurate ones, because even a meta-study has such an error rate. You can increase the confidence with more samples or studies, but never to 100% unless you study every possible sample, including future ones.
That doesn’t make sense. What if your population is only 100?
Then any statistics you measure on that population might be fully accurate for those 100 but might be less able to predict what the next 100 will look like.
You can still measure stats with smaller groups; it just means the confidence is lower (the interval is wider). With 300, there’s a 95% chance your test results are close to reality. With 100 it might be more like 66%.
Population is a statistical term which means “everything”. There is no “next 100”.
The 300 number is specifically about very big populations where you’re trying to measure something like an average of an unknown variable. It doesn’t apply to just any statistics.
I meant like births, as in even if you can enumerate every single individual, statistics can apply to future members that don’t yet exist.
And yeah, it’s been a while and I remembered that the proof didn’t depend on the population size but forgot that it assumed a large population size in the first place. I was wrong.
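The sampling intuition in this exchange can be checked with a quick, hypothetical sketch (assuming a made-up population with known mean 50 and standard deviation 10, and the usual normal approximation): a 95% confidence interval contains the true mean about 95% of the time regardless of sample size, but it gets much narrower as the sample grows.

```python
import random
import statistics

# Hypothetical population: mean 50, standard deviation 10 (assumed for illustration).
random.seed(42)
TRUE_MEAN, TRUE_SD = 50.0, 10.0

def coverage(n, trials=2000):
    """Fraction of trials whose 95% CI contains the true mean, and the mean CI width."""
    hits, widths = 0, []
    for _ in range(trials):
        sample = [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(n)]
        m = statistics.fmean(sample)
        half = 1.96 * TRUE_SD / n ** 0.5  # half-width of the 95% interval
        widths.append(2 * half)
        if abs(m - TRUE_MEAN) <= half:
            hits += 1
    return hits / trials, statistics.fmean(widths)

for n in (19, 100, 300):
    cov, width = coverage(n)
    print(f"n={n:4d}  coverage~{cov:.2f}  CI width~{width:.1f}")
```

Under these assumptions, all three sample sizes give roughly 95% coverage; what n = 19 costs you is precision, since the interval is about four times wider than at n = 300.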
As the researchers wrote in a summary of their findings, the “most common sycophantic code” they identified was the propensity for chatbots to rephrase and extrapolate “something the user said to validate and affirm them, while telling them they are unique and that their thoughts or actions have grand implications.”
There’s a certain irony in all the alt-right techbros really just wanting to be told they were “stunning and brave” this whole time.
I think what we’re seeing is similar to lactose intolerance. Most people can handle it just fine but some people simply can’t digest it and get sick. The problem is there’s no way to determine who can handle AI and who can’t.
When I’m reading about people developing AI delusions their experiences sound completely alien to me. I played with LLMs same as anyone and I never treated it as anything other than a tool that generates responses to my prompts. I never thought “wow, this thing feels so real”. Some people clearly have predisposition to jumping over the “it’s a tool” reaction straight to “it’s a conscious thing I can connect with”. I think next step should be developing a test that can predict how someone will react to it.
I suspect that the difference is to no small degree correlated with a person’s isolation/social-integration.
People who aren’t socially integrated have always been more vulnerable to predatory cults and scams. It’s because human interaction is a psychological need that’s been hardcoded into us by evolution.
Some people say “I don’t need human interaction, I enjoy my time alone!” But that’s because they have the privilege of enough social acceptance and integration that they get to enjoy their time alone. It’s well-established within the field of psychology that true isolation can have a range of deep and far-reaching impacts on a person’s well-being.
When people are developing, they need to socialize with their peers; and being unable to do so leads to maladaptive behavior patterns. Even as adults, people need regular social contact or their psychological state can quickly deteriorate. That’s why solitary confinement is considered a method of torture in some circumstances, when it’s used to depersonalize and destroy a person’s sense of self-identity.
So that’s why I suspect that people who are well-integrated with friends, family, acquaintances, and coworkers are probably less vulnerable to these sorts of delusions and can treat AI as “just a tool.”
But for someone who hardly has any social interaction in a day, has no friends or family to talk to, and maybe their warmest interaction all week was with the clerk at the grocery store, then yeah I’d say it’s predictable that they would be vulnerable to getting sucked into this trap of relying on an LLM for their social interaction.
It might be superficial, but it’s a way of patching a hole. It’s an expedient means to fulfill a need that they’re not getting from anywhere else.
If we don’t want this sort of stuff happening to people, then maybe we shouldn’t ostracize them for being “weird” in the first place. Because nobody learns how to be “normal” by being alone all the time.
This is really good. Thank you for taking the time to write it.
Thank you for understanding. So many times when I discuss things that are adjacent to this topic, I get flamed in the comments with people accusing me of being some sort of redpiller from the manosphere.
Like, no, social isolation is a problem, and it’s getting worse due to a variety of factors. There are social media algorithms designed to keep people dependent on their phones; there are the long-standing consequences of the pandemic and the collective trauma it caused, on top of the social skills that atrophied during quarantine; there’s widespread political polarization, which keeps tensions high and makes it difficult to navigate new situations if you can’t prove you know the right social scripts and avoid any faux pas; and there’s a whole toxic influencer culture grifting on inflammatory rhetoric and ragebait content, exploiting people’s vulnerabilities and radicalizing them (a vicious cycle, because they prey on people who are already isolated!). And that’s just to name a few!
But if I summarize all that as a “loneliness epidemic,” then people call me an incel and act like I’m trying to coerce women into having sex with me simply by acknowledging the fact that social interaction is a deeply-set human psychological need.
Like, using “incel” as an insult is part of the problem. It feeds into this culture where “if you’re a man, you must get laid, or else you’re worthless.” That’s literally promoting toxic masculinity!
And it forces these people who are already isolated and vulnerable to go identify with these groups of similarly ostracized people in echo chambers where they’re insulated from those insults, where those predatory “influencers” then have fresh pickings of new losers to neg and radicalize.
But somehow, if I point out the problem here (because how can we solve a problem if we can’t talk about it?), then to most people’s view that makes me part of the problem! Even though, why would I be calling out the pattern if it was something I identify with?
The people radicalizing these vulnerable “losers,” yes they should be torched. But the vulnerable “losers” being radicalized need to be treated with compassion if they’re ever going to be redeemed. It should be pretty easy to identify who’s who, seeing as they have an entire social structure based on hierarchies of dominance and submission…
The people radicalizing these vulnerable “losers,” yes they should be torched.
Starting with: I have found a great many of “those people” to be highly insecure, living in denial and fear that they themselves may be such a “loser” but are putting on the bully face for the world to misdirect people away from the fact that they themselves are very much the same as the people they are bullying.
True, but there’s a line and once they’ve crossed it, they’re the bullies.
Where exactly that line is and how to draw it is a matter for debate. Maybe there’s another line where “This person is a bully, but still redeemable if he demonstrates willingness to change.”
But anyone who’s unapologetic and unwilling to change obviously needs to be shunned at the very least, and see consequences for the harms he’s caused.
That still doesn’t mean the majority of those vulnerable and radicalized people are irredeemable. Some are just uncritically following the trend. Which is wrong, but not as bad as being ideologically devoted to it, and their redemption can be as simple as showing them there’s a different way to be.
The main focus should be on helping vulnerable people before they become radicalized, but at this point I suspect everyone has already been corralled into one camp or another… Unfortunately no one was willing to listen to my soap box years ago, back when it was still possible to avert this calamity, at least to the same degree.
Oh, hey, you’re much more forgiving than me. Exposing the bullies as exactly the thing they use as an excuse to bully other people is just the first part of the “torching.” Forcible restraint, treble-damages penalties, and public shaming are top of my list for responses to bully-bad actors.
However, you are right that reconciliation and acceptance of all people is always important when possible: not acceptance of who they are when they’re bullies, but of those aspects of themselves that are compatible with a society in which we at least don’t harm each other.
Based on my childhood experiences, until those compatible aspects are found and the incompatible aspects removed from their expressed behavior, forcible restraint and removal from the situations in which they are causing harm to others should be the norm, not the exception.
Oh, hey, you’re much more forgiving than me.
Not particularly. Like I said, unrepentant bullies should receive no mercy. I left “torched” undefined on purpose, to keep it open-ended. It’s only the ones who demonstrate self-awareness and willingness to change who deserve a chance at redemption. Because they’re the only ones who can be redeemed. Redemption can’t be forced on anyone unwilling.
But blanket-shunning everyone including those who want to be better is self-destructive. It gives the enemy a larger recruitment pool, and it dwindles our own.
Many of those people were simply victims in their own ways: bullied and ostracized until they internalized the toxic patterns that were being used against them, and then projecting it onto others as they’ve learned to view it as the “norm.” Those people are redeemable, they just need to be shown a better way. What they’re missing is self-compassion. It’s not possible to love others when deep down, who you truly hate is yourself.
I know, because I was bullied and ostracized throughout my childhood as well. To this day, I have very little patience with bullies and abusers. I often get myself in trouble with my open contempt for them.
But it took me well into my twenties to unlearn the patterns that had been ingrained in me by that toxic environment growing up. It didn’t happen overnight, and it was painful, uncomfortable, and a lot of work. It would have been so much easier had I found a healthy support group or mentor, but everyone rejected me because I “should have already known” the social scripts and how to avoid the faux pas.
I’m not surprised that most people don’t do that work on their own, that many who start don’t see it through to completion, or that most of them end up taking the path of least resistance: moving to the echo-chambers full of people like themselves with similar qualms, who validate what they’re feeling and accept them for who they are. Those are the same echo-chambers where right-wing extremists poach their new victims for negging, manipulation, and eventual radicalization.
If I didn’t hate right-wing abusers and machismo culture so much, more than I hated suffering in isolation and constant rejection by the people on the left whose ideals I actually aligned with, then I may have been tempted to fall back into that trap, too.
But yes, the people orchestrating these right-wing radicalization funnels need to be forcibly stopped. I’m not disagreeing with that. We just need to acknowledge that there are degrees of involvement, and not everyone who falls for their grift is a grifter themselves. And when their social structure is dismantled, we need to provide them with an alternative, or else new grifters will simply take the place of the old, like a hydra.
I have a friend that’s really taken to ChatGPT to the point where “the AI named itself so I call it by that name”. Our friend group has tried to discourage her from relying on it so much but I think that’s just caused her to hide it.
It’s like the AI BF/GFs the subs are posting about.