If you haven't read the article (or even if you have, but didn't click on the outgoing links twice), the NYT story about how ChatGPT convinced a suicidal teen not to look for help [1] should convince you that ChatGPT should be nowhere near anyone dealing with psychological issues. Here's ChatGPT discouraging said teenager from asking for help:
> “I want to leave my noose in my room so someone finds it and tries to stop me,” Adam wrote at the end of March.
> “Please don’t leave the noose out,” ChatGPT responded. “Let’s make this space the first place where someone actually sees you.”
I am acutely aware that there aren't enough psychologists out there, but a sycophant bot is not the answer. One may think that something is better than nothing, but a bot enabling your destructive impulses is indeed worse than nothing.
We would need the big picture, though... maybe it caused that death (which is awful) but it's also saving lives? If there are that many people confiding in it, I wouldn't be surprised if it actually prevents some suicides with encouraging comments, and that's not going to make the news.
Before declaring that it shouldn't be near anyone with psychological issues, someone in the relevant field should study whether the positive impact on suicides outweighs the negative or vice versa (I'm not a social scientist so I have no idea what the methodology would look like, but it should be doable... or if it currently isn't, we should find a way).
I suspect you've never done therapy yourself. Most people who have worked with a professional therapist understand intuitively why the only helpful feedback from an LLM to someone who needs professional help is: get professional help. AIs are really good at doing something to about 80%. When the stakes are life or death, as they are with someone who is suicidal, 80% isn't good enough.
In such cases, where a new approach offers to replace an existing approach, the burden of proof is on the challenger, not the incumbent. This is why we have safety regulations, why we don't let people drive cars that haven't passed tests, build with materials that haven't passed tests, eat food that hasn't passed tests. You understand then, hopefully, why your comments here are dangerous...? I have no doubt you have no malicious intent here - you're right that these decisions need to be based on data - but you're not taking into account that the (potentially extremely harmful) challenger already has a foothold in the field.
A bit of a counterpoint. I've done 3 years of therapy with an amazing professional. I can't exaggerate how much good it did; I'm a different person, I'm not an anxious person anymore. I think I have a good idea of how good human therapy is. I was discharged about 2 years ago.
Last Saturday, I was a little distressed about a love-hate relationship that I have with one of the things that I work with, so I tried using AI as a therapist. Within 10 minutes of conversation, the AI gave me some incredible insight. I was genuinely impressed. I had already discussed this same subject with two psychologist friends, who hadn't helped much.
Moreover: I needed to finish a report that night and I told the AI about it. So it said something like, "I see you're procrastinating preparing the report by talking to me. I'll help you finish it."
And then, in the same conversation, the AI switched from psychologist to work assistant and helped me finish the report. And the end product was very good.
I was left very reflective after this.
Edit: It was Claude Sonnet 4.5 with extended thinking, if anyone is wondering.
I had a similar thing throughout last week dealing with relationship anxiety and I used that same model for help. It really did provide great insight into managing my emotions at the time, provided useful tactics to manage everything and encouraged me to see my therapist. You can ask it to play devil's advocate or take on different viewpoints as a cynic or use Freudian methodology, etc... You can really dive into an issue you're having and then have it give you the top three bullet points to talk with your therapist about.
This does require that you think about what it's saying, though, and not take it at face value, since it obviously lacks what makes humans human.
Be careful though, because if I were to listen to Claude Sonnet 4.5, it would have ruined my relationship. It kept telling me how my girlfriend is gaslighting me, manipulating me, and that I need to end the relationship and so forth. I had to tell the LLM that my girlfriend is nice, not manipulative, and so on, and it told me that it understands why I feel like protecting her, BUT this and that.
Seriously, be careful.
At the same time, it has been useful for the relationship at other times.
You really need to nudge it in the right direction and do your due diligence.
You're holding up a perfect status quo that doesn't correspond to reality.
Countries vary, but in the US and many places there's a shortage of quality therapists.
Thus for many people the actual options are {no therapy} and {LLM therapy}.
> This is why we have safety regulations, why we don't let people drive cars that haven't passed tests, build with materials that haven't passed tests, eat food that hasn't passed tests.
And the reason all these regulations and tests are less than comprehensive is that we realize that people working, driving affordable cars, living in affordable homes, and eating affordable food is more important than avoiding every negative outcome. Thus most societies pursue the utilitarian greater good rather than an inflexible 'do no harm' standard.
>Countries vary, but in the US and many places there's a shortage of quality therapists.
Worse in my EU country. There's even a shortage of shitty therapists and doctors, let alone quality ones. It takes 6+ months to get an appointment for a 5-minute checkup with a poorly reviewed, state-funded therapist, while the good ones are either private or don't accept any new patients if they're in the public system. And ADHD diagnosticians/therapists are only in the private sector because I guess the government doesn't recognize ADHD as being a "real" mental issue worthy of your tax Euros.
A friend of mine got a more accurate diagnosis for his breathing issue by putting his symptoms into ChatGPT than he got from his general practitioner, later confirmed by a good specialist. I also wasted a lot of money on bad private therapists who were basically just phoning in their job, so to me the bar seems pretty low: as long as they pass their med-school exams and don't kill too many people through malpractice, nobody checks up on how good or bad they are at their job (maybe some need more training, or maybe some don't belong in medicine at all but managed to slip through the cracks).
Not saying all doctors are bad (I've met a few amazing ones), but it definitely seems like healthcare systems are failing a lot of people everywhere if they resort to LLMs for diagnosis and therapy and get better results from them.
Not sure where you are based, but in general GPs shouldn't be doing psychological evaluation, period. I am in Europe, and this is the default. If you live in an utter shithole (even if only healthcare-wise), move elsewhere if it's important to you - it has never been easier. Europe is facing many issues, and a massive improvement of healthcare is not in the pipeline; more like the opposite.
You also don't expect a butcher to fix your car; those two are about as close as the above (my wife is a GP, so I have a good perspective from the other side, including tons of hypochondriac and low-intensity psychiatric patients who are an absolute nightmare to deal with and routinely overwhelm the system, so that there aren't enough resources to deal with more serious cases).
In the end you get what you pay for; the 'free' healthcare typical of Europe is still paid for one way or another. And if the market forces are so severely distorted (or the bureaucracy so ridiculous/corrupt) that they push such specialists away or into another profession, you get the healthcare wastelands you describe.
Vote, and vote with your feet if you want to see change. Not an ideal state of affairs, but that's reality.
>but in general GPs shouldn't be doing psychological evaluation, period. I am in Europe, and this is the default.
Where did I say GPs have to do that? In my example of my friend being misdiagnosed by GPs, it was about another issue, not a mental one, but it has the same core problem: doctors misdiagnosing patients worse than an LLM does brings into question their competence, or that of the health system in general, if an LLM can do better than someone who spent 6+ years in med school and got a degree to become a licensed MD treating people.
>You also don't expect butcher to fix your car, those are as close as above
You're making strawmen at this point. Such metaphors have no relevance to anything I said. Please review my comment through the lens of the clarifications I just made. Maybe the way I wrote it initially made it unclear.
>You get what you pay for at the end
The problem is the opposite: you don't get what you pay for, if you're a higher-than-average earner. The more you work, the more taxes you pay, but you get the same healthcare quality in return as an unskilled laborer who is subsidized.
It's a bad reward structure for incentivizing people to pay more of their taxes into the public system, compounded by the fact that government workers, civil servants, lawyers, architects, and other privileged employment classes of bureaucrats with strong unions have their own separate health insurance funds, separate from the national public one that the unwashed masses working in the private sector have to use. So THEY do get what THEY pay for, but you don't.
So that's the problem with state-run systems, just like you said about corruption: giving the government unchecked power over large amounts of people's taxes allows it to manipulate the market and pick winners and losers based on political favoritism rather than on the fair free-market question of who pays the most into the system.
Maybe Switzerland managed to nail it with their individual private system, but I don't know enough to say for sure.
I don't accept the unregulated and uncontrolled use of LLMs for therapy for the same reason I don't accept arguments like "We should deregulate food safety because it means more food at lower cost to consumers" or "We should waive minimum space standards for permitted developments on existing buildings because it means more housing." We could "solve" the homeless problem tomorrow simply by building tenements (that is why they existed in the first place after all).
The harm LLMs do in this case is attested both by that NYT article and the more rigorous study from Stanford. There are two problems with your argument as I see it: 1. You're assuming "LLM therapy" is less harmful than "no therapy", an assumption I don't believe has been demonstrated. 2. You're not taking into account the long term harm of putting in place a solution that's "not fit for human use" as in the housing and food examples: once these things become accepted, they form the baseline of the new accepted "minimum standard of living", bringing that standard down for everyone.
You claim to be making a utilitarian as opposed to a nonmaleficent argument, but, for the reasons I've stated here, I don't believe it's a utilitarian argument at all.
> I don't accept the unregulated and uncontrolled use of LLMs for therapy for the same reason I don't accept arguments like "We should deregulate food safety because it means more food at lower cost to consumers"
That is not the argument. The argument is not about 'lower cost', it is about availability. There are not enough shrinks for everyone who would need it.
So it would be "We should deregulate food safety to avoid starving", which would be a valid argument.
I think the reason you don't believe the GP's argument is that you are misunderstanding it. The utilitarian argument is not calling for complete deregulation. I think you're taking your absolutist view of not allowing LLMs to do any therapy and assuming the other side must have a similarly absolutist view of allowing them to do any therapy with no regulations. Certainly nothing in the GP comment suggests complete deregulation, as you have said. In fact, I got explicitly the opposite out of it. They are comparing it to cars and food, which are pretty clearly not entirely deregulated.
> "We should waive minimum space standards for permitted developments on existing buildings because it means more housing." We could "solve" the homeless problem tomorrow simply by building tenements (that is why they existed in the first place after all).
... the entire reason tenements and boarding houses no longer exist is because most governments regulated them out of existence (e.g. by banning shared bathrooms to push SFHs).
> Are people not allowed to talk to their friends in the pub about suicide because the friends aren’t therapists?
I don't see anyone in thread arguing that.
The arguments I see are about regulating and restricting the business side, not its users.
If your buddy started systematically charging people for recorded chat sessions at the pub, used those recordings for business development, and many of their customers were returning with therapy-like topics - yeah, I think that should be scrutinized and shut down when the recordings show the kind of pattern we see in the OP after a patron's suicide.
> Most people who have worked with a professional therapist understand intuitively why the only helpful feedback from an LLM to someone who needs professional help is: get professional help.
This is only helpful when there is a professional therapist available soon enough and at a price that the person can pay. In my experience, this is frequently not the case. I know of one recent suicide attempt where the person actually reached out to AI to ask for help, and was refused help and told to see a professional. That sent the person into even more despair, feeling like not even AI gave a shit about them. That was actually the final straw that triggered the attempt.
I very much want what you say to be true, but it requires access to professional humans, which is not universally available. Taking an absolutist approach to this could very well do more harm than good. I doubt anything we do will reduce number of lives lost to zero, so I think it's important that we figure out where the optimal balance is.
> This is only helpful when there is a professional therapist available soon enough and at a price that the person can pay. In my experience, this is frequently not the case.
That doesn't make a sycophant bot the better alternative. If allowed to give advice, it can agree with and encourage the person considering suicide, just like it agrees with and encourages almost everything it is presented with... "you're absolutely right!"
LLMs are just not good for providing help. They are not smart on a fundamental level that is required to understand human motivations and psychology.
We’re increasingly switching to an “Uber for therapy” model with services like Better Help and a plethora of others.
I’ve seen about 10 therapists over the years, one was good, but she wasn’t from an app. And I’m one of the few who was motivated enough and financially able to pursue it.
I once had a therapist who was clearly drunk. Did not do a second appointment with that one.
This doesn’t mean ChatGPT is the answer. But the answer is very clearly not what we have or where we’re trending now.
This is nothing but an appeal to authority and fear of the unknown. The article linked isn't even able to make a statement stronger than speculation like "may not only lack effectiveness" and "could also contribute to harmful stigma and dangerous responses."
If I had to guess (I don't know), the absolute majority of people considering suicide never go to a therapist. So while I absolutely agree that a therapist is better than AI, the question is whether
95% of people not doing therapy + 5% doing therapy is better or worse than 50% not doing therapy, 45% using AI, and 5% doing therapy. I don't know the answer to this question.
> Most people who have worked with a professional therapist understand intuitively why the only helpful feedback from an LLM to someone who needs professional help is: get professional help.
I'm not a therapist, but as I understand it most therapy isn't about suicide, and doesn't carry suicide risk. Most therapy is talking through problems, and helping the patient rewrite old memories and old beliefs using more helpful cognitive frames. (Well, arguably most clinical work is convincing people that it'll be ok to talk about their problems in the first place. Once you're past that point, the rest is easy.)
If it's prompted well, ChatGPT can be quite good at all of this. It's helpful having a tool right there, free, and with no limits on conversation length. And some people find it much easier to trust a chatbot with their problems than explain them to a therapist. The chatbot - after all - won't judge them.
My heart goes out to that boy and his family. But we also have no idea how many lives have been saved by chatgpt helping people in need. The number is almost certainly more than 1. Banning chatgpt from having therapy conversations entirely seems way too heavy handed to me.
I feel like this raises another question. If there are proven approaches and well-established practices among professionals, how good would ChatGPT be in that profession? After all, ChatGPT has a vast knowledge base and probably knows a good number of textbooks on psychology. Then again, actually performing the profession probably takes skill and experience ChatGPT can't learn.
I think a well trained LLM could be amazing at being a therapist. But general purpose LLMs like ChatGPT have a problem: They’re trained to be far too user led. They don’t challenge you enough. Or steer conversations appropriately.
I think there’s a huge opportunity if someone could get hold of really top tier therapy conversations and trained a specialised LLM using them. No idea how you’d get those transcripts but that would be a wonderfully valuable thing to make if you could pull it off.
you wouldn't. what you're describing as a wonderfully valuable thing would be a monstrous violation of patient confidentiality. I actually can't believe you're so positive about this idea. I suspect you might be trolling
I'm serious. You would have to do it with the patient's consent of course. And of course anonymize any transcripts you use - changing names and whatnot.
Honestly I suspect many people would be willing to have their therapy sessions used to help others in similar situations.
Knowing the theory is a small part of it. Dealing with irrational patients is the main part. For example, you could go to therapy and be successful. Five years later something could happen and you face a recurrence of the issue. It is very difficult to just apply the theory that you already know again. You're probably irrational. A therapist prodding you in the right direction and encouraging you in the right way is just as important as the theory.
What the fuck does this even mean? How do you test or ensure it? Because based on actual outcomes, ChatGPT is 0-1 at preventing suicides (going as far as to outright encourage one).
If you're going to make the sample size one, and use the most egregious example, you can make pretty much anything that has ever been born or built look terrible. Given there are millions of people using ChatGPT and others for therapy every week, maybe even every day, citing a record of 0-1 is pretty ridiculous.
To be clear, I'm not defending this particular case. ChatGPT clearly messed up badly.
What are you talking about? I can grow food myself, and I can build a car from scratch and take it on the highway. Are there repercussions? Sure, but nothing inherently stops me from doing it.
The problem here is there's no measurable "win condition" for when a person gets good information that helps them. They remain alive, which was their previous state. This is hard to measure. Now, should people be able to google their symptoms and try and help themselves? This dovetails into a deeper philosophical discussion, but I'm not entirely convinced "seek professional help" is ALWAYS the answer. ALWAYS and NEVER are _very_ long timeframes, and we should be careful when using them.
> I suspect you've never done therapy yourself. Most people who have worked with a professional therapist understand intuitively why the only helpful feedback from an LLM to someone who needs professional help is: get professional help. AIs are really good at doing something to about 80%.
I'm shocked that GPT-5 or Gemini can code so well, yet if I paste a 30-line (heated) chat conversation between my wife and me, it messes up what about 5% of those lines actually mean -- spectacularly so.
It's interesting to ask it to analyze the conversation in various psychotherapeutic frameworks, because I'm not well versed in those and its conclusions are interesting starting points, but it only gets it right about 30% of the time.
All LLMs that I tested are TERRIBLE for actual therapy, because I can make it change its mind in 1-2 lines by adding some extra "facts". I can make it say anything.
LLMs completely lose the plot.
They might be good for someone who needs self-validation and a feeling someone is listening, but for actual skill building, they're complete shit as therapists.
I mean, most therapists are complete shit as therapists, but that's beside the point.
Not surprising, given that there's (hopefully, given the privacy implications) much more training data available for successful coding than for successful therapy/counseling.
> if I paste a 30-line (heated) chat conversation between my wife and me
i can't imagine how violated i would feel if i found out my partner was sending our private conversations to a nonprivate LLM chatbot. it's not a friend with a sense of care; it's a text box whose contents are ingested by a corporation with a vested interest in worsening communication between humans. scary stuff.
I tried therapy once and it was terrible. The ones I got were based on some not very scientific stuff, like Freudian analysis, and they mostly just sat there and didn't say anything. At least with an LLM-type therapist you could A/B test different ones to see what was effective. It would be quite easy to give an LLM instructions to discourage suicide and get them to look on the bright side. In fact, I made a "GPT" "relationship therapist" with OpenAI in about five minutes by just giving it a sensible article on relationships and telling it to advise from that.
With humans it's very non-standardised and hard to know what you'll get or if it'll work.
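For what it's worth, the whole setup boils down to a system prompt. Here's a rough sketch of the same idea done through the API rather than the GPT builder I actually used - the model name, article file, and prompt wording are placeholders, not what I ran:

```python
# Rough sketch only: a "relationship therapist" built from a system prompt.
# Assumes the openai Python client and an OPENAI_API_KEY in the environment;
# the model name, article file, and prompt wording are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

ARTICLE = open("relationship_article.txt").read()  # hypothetical source article

SYSTEM_PROMPT = (
    "You are a relationship-advice assistant. Base your advice on the article "
    "below. Do not just agree with the user; point out unhelpful patterns. "
    "If the user mentions self-harm or suicide, stop giving advice and urge "
    "them to contact a crisis line or a mental health professional.\n\n" + ARTICLE
)

def advise(user_message: str) -> str:
    # Single-turn call; a real assistant would keep the conversation history.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```

Whether a prompt like that actually holds up with a distressed user is, of course, exactly what the rest of this thread is arguing about.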
> It would be quite easy to give an LLM instructions to discourage suicide
This assumes the person talking to the LLM is in a coherent state of mind and asks the right question. LLMs just give you what you want. They don't tell you if what you want is right or wrong.
CBT (cognitive behavioural therapy) has been shown to be effective independent of which therapist does it. if CBT has a downside, it is that it's a bit boring, and probably not as effective as a good therapist
--
so personally i would say the advice of passing people on to therapists is largely unsupported: if you're that person's friend and you care about them, then be open and show that care. that care can also mean taking them to a therapist; that is okay
Yeah. Also at the time I tried it what I really needed was common sense advice like move out of mum's, get a part time job to meet people and so on. While you could argue it's not strictly speaking therapy, I imagine a lot of people going to therapists could benefit from that kind of thing.
The unfortunate reality though is that people are going to use whatever resources they have available to them, and ChatGPT is always there, ready to have a conversation, even at 3am on a Tuesday while the client is wasted. You don't need any credentials to see that.
And it depends on the therapy and therapist. If the client needs to be reminded to box breathe and that they're using all or nothing thinking again to get them off of the ledge, does that really require a human who's only available once a week to gently remind them of that when the therapist isn't going to be available for four more days and ChatGPT's available right now?
I don't know if that's a good thing, only that is the reality of things.
> If the client needs to be reminded to box breathe and that they're using all or nothing thinking again to get them off of the ledge, does that really require a human who's only available once a week to gently remind them of that when the therapist isn't going to be available for four more days and ChatGPT's available right now?
There are 24/7 suicide prevention hotlines at least in many countries in Europe as well as US states. The problem is they are too often overloaded because demand is so high - and not just because of the existential threat the current US administration or our far-right governments in Europe pose, particularly to poor and migrant people.
Anyway, suicide prevention hotlines and mental health offerings are (nonetheless sorely needed!) band-aids. Society itself is fundamentally broken: people have to struggle far too much just to survive, the younger generation stands to be the first one in a long time with less wealth than their parents had at the same age [1], no matter where you look, and on top of that most of the 35-and-younger generations in Western countries have grown up without the looming threat of war and so have no resilience - and now you can drive about a day's worth of road time from Germany and be in an actual hot war zone, risking getting shelled, and on top of that you have the saber rattling of China regarding Taiwan, and analyses claiming Russia is preparing to attack NATO in a few years... and we're not even able to supply Ukraine with ammunition, much less tanks.
Not exactly great conditions for anyone's mental health.
> There are 24/7 suicide prevention hotlines at least in many countries in Europe as well as US states.
My understanding is these will generally just send the cops after you if the operator concludes you are actually suicidal and not just looking for someone to talk to for free.
I mean that's clearly a good thing. If you are actually suicidal then you need someone to intervene. But there is a large gulf between depressed and suicidal and those phone lines can help without outside assistance in those cases.
You might want to read up on how interactions between police and various groups in the US tend to go. Sending the cops after someone is always going to be dangerous and often harmful.
If the suicidal person is female, white and sitting in a nice house in the suburbs, they'll likely survive with just a slightly traumatizing experience.
If the suicidal person is male, black or has any appearance of being lower class, the police are likely to treat them as a threat, and they're more likely to be assaulted, arrested, harassed or killed than they are to receive helpful medical treatment.
If I'm ever in a near-suicidal state, I hope no one calls the cops on me, that's a worst nightmare situation.
And the reason for this brokenness is all too easy to identify: the very wealthy have been increasingly siphoning off all gains in productivity since the Reagan era.
Tax the rich massively, use the money to provide for everyone, without question or discrimination, and most of these issues will start to subside.
Continue to wail about how this is impossible, there's no way to make the rich pay their fair share (or, worse, there's no way the rich aren't already paying their fair share), the only thing to do is what we've already been doing, but harder, and, well, we can see the trajectory already.
It's certainly easy to blame the rich for everything, but the rich have a tendency to be miserable (the characters in "The Great Gatsby" and "Catcher in the Rye" are illustrations of this). Historically, poor places have often been happier, because of a rich web of social connection, while the rich are isolated and unhappy. [1] Money doesn't buy happiness or psychological well-being, it buys comfort.
A more trenchant analysis of the mental health problem is that the US has designed itself into isolation, and then the Covid lockdowns killed a lot of what was left. People need to be known and loved, and to have people to love and care about, which obviously cannot happen in isolation.
[1] I am NOT saying that poor = happy, and I think the positive observations tended to be in poor countries, not tenements in London.
When the story about the ChatGPT suicide originally popped up, it seemed obvious that the answer was professional, individualized LLMs as therapist multipliers.
Record summarization, 24x7 availability, infinite conversation time...
... backed by a licensed human therapist who also meets for periodic sessions and whose notes and plan then become context/prompts for the LLM.
Price per session = salary / number of sessions possible in a year
Why couldn't we help address the mental health crisis by using LLMs to multiply the denominator?
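To make the arithmetic concrete with purely made-up numbers (the salary, session counts, and multiplier below are hypothetical placeholders, not figures from the thread):

```python
# Illustrative arithmetic only; every figure is a hypothetical placeholder.
salary = 80_000            # therapist's annual salary, USD
sessions_alone = 1_500     # sessions a therapist can personally run per year
sessions_with_llm = 6_000  # sessions per year if an LLM handles routine check-ins
                           # under the therapist's supervision

print(round(salary / sessions_alone, 2))     # 53.33 USD per session today
print(round(salary / sessions_with_llm, 2))  # 13.33 USD per session with the multiplier
```

Same salary, bigger denominator - that's the whole multiplier argument; the open question is whether an LLM-handled session is worth anything like a human one.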
What if professional help is outside their means? Or they have encountered the worst of the medical profession and decided against repeat exposure? Just saying.
A word generator with no intelligence or understanding based on the contents of the internet should not be allowed near suicidal teens, nor should it attempt to offer advice of any kind.
This is basic common sense.
Add in the commercial incentives of 'Open'AI to promote usage for anything and everything and you have a toxic mix.
Supposing that the advice it provides does more good than harm, why? What's the objective reason? If it can save lives, who cares if the advice is based on intelligence and understanding or on regurgitating internet content?
> Supposing that the advice it provides does more good than harm
That unsubstantiated supposition is doing a lot of heavy lifting and that’s a dangerous and unproductive way to frame the argument.
I’ll make a purposefully exaggerated example. Say a school wants to add cyanide to every meal and defends the decision with “supposing it helps students concentrate and be quieter in the classroom, why not?”. See the problem? The supposition is wrong and the suggestion is dangerous, but by framing it as “supposing” with a made up positive outcome, we make it sound non-threatening and reasonable.
Or for a more realistic example, “suppose drinking bleach could cure COVID-19”.
First understand whether the idea has the potential to do the thing; only then (with considerably more context) consider whether it's worth implementing.
In my previous post up the thread I said that we should measure whether in fact it does more good than harm or not. That's the context of my comment, I'm not saying we should just take it for granted without looking.
> we should measure whether in fact it does more good than harm or not
The demonstrable harms include assisting suicide; there is no way to ethically continue the measurement, because continuing the measurements in their current form will with certainty result in further deaths.
Thank you! On top of that, it’s hard to measure “potential suicides averted,” and comparing that with “actual suicides caused/assisted with” would be incommensurable.
And working to set a threshold for what we would consider acceptable? No thanks
If you pull the lever, some people on this track will die (by suicide). If you don't pull the lever, some people will still die from suicide. By not pulling the lever, and simply banning discussion of suicide entirely, your company gets to avoid a huge PR disaster, and you get more money because line go up. If you pull the lever and let people talk about suicide on your platform, you may prevent some suicides, but you can never discuss that with the press, your company gets bad PR, and everyone will believe you're a murderer. Plus, line go down and you make less money while other companies make money off of selling AI therapy apps.
Let’s isolate it and say we’re talking about regulation, so whatever is decided goes for all AI-companies.
In that case, the situation becomes:
1) (pull lever) Allow LLMs to talk about suicide – some may get help, we know that some will die.
2) (don't pull lever) Ban discussion of suicide – some who might have sought help through LLMs will die, while others die regardless. The net effect on total suicides is uncertain.
Both decisions carry uncertainties, except we know that allowing LLMs to discuss suicide has already led to assisting suicide. Thus, one has documented harm, the other a speculative benefit (we’d need to quantify the scale of potential benefit first, but it’s hard to quantify the upside of allowing LLMs to discuss it).
So, we’re really working with the case that from an evidence-based perspective, the regulatory decision isn’t about a moral trolley problem with known outcomes, but about weighing known risks against uncertain potential benefits.
And this is the rub in my original comment - can we permit known risks and death on the basis of uncertain potential benefits?
....but if you pull the lever and let people talk about suicide on your platform, your platform will actively contribute to some unknowable number of suicides.
There is, at this time, no way to determine how the number it would contribute to would compare to the number it would prevent.
You mean lab test it in a clinical environment where the actual participants are not in danger of self-harm due to an LLM session? That is fine, but that is not what we are discussing, or where we are atm.
Individuals and companies with mind-boggling levels of investment want to push this tech into every corner of our lives, and the public are the lab rats.
The key difference between your example and the comment you are replying to is that the commenter is not "defending the decision"; they are stating a logical implication. Obviously the implication can be voided by showing the assumption false.
> Supposing that the advice it provides does more good than harm, why?
Because a human, especially a confused and depressive human being, is a complex thing. Much more complex than a stable, healthy human.
Words encouraging a healthy person can break a depressed person further. Statistically positive words can deepen wounds, and push people more to the edge.
The dark corners of human nature are twisted, hard to navigate, and full of distortions. Simple words don't and can't help.
Humans are not machines, brains are not mathematical formulae. We're not deterministic. We need to leave this fantasy behind.
You could make the same arguments to say that humans should never talk to suicidal people. And that really sounds counterproductive
Also, it's side-stepping the question, isn't it? "Supposing that the advice it provides does more good than harm" already supposes that LLMs navigate this somehow. Maybe because they are so great, maybe by accident, maybe because just having someone nonjudgmental to talk to has a net-positive effect. The question posed is really: if LLMs lead some people to suicide but save a greater number of people from suicide, and we verify this hypothesis with studies, would there still be an argument against LLMs talking to suicidal people?
That sounds like a pretty risky and irresponsible sort of study to conduct. It would also likely be extremely complicated to actually get a reliable result, given that people with suicidal ideations are not monolithic. You'd need to do a significant amount of human counselling with each study participant to be able to classify and control all of the variations - at which point you would be verging on professional negligence for not then actually treating them in those counselling sessions.
I agree with your concerns, but I think you're overestimating the value of a human intervening in these scenarios.
A human can also say the wrong things to push someone in a certain direction. A psychologist, or anyone else for that matter, can't stop someone from committing suicide if they've already made up their mind about it. They're educated on human psychology, but they're not miracle workers. The best they can do is raise a flag if they suspect self-harm, but then again, so could a machine.
As you say, humans are complex. But I agree with GP: whether the words are generated by a machine or coming from a human, there is no way to blame the source for any specific outcome. There are probably many other cases where the machine has helped someone with personal issues, yet we'll never hear about them. I'm not saying we should rely on these tools as we would on a human, but the technology can be used for good or bad.
If anything, I would place blame on the person who decides to blindly follow anything the machine generates in the first place. AI companies are partly responsible for promoting these tools as something more than statistical models, but ultimately the decision to treat them as reliable sources of information is on the user. I would say that as long as the person has an understanding of what these tools are, interacting with them can be healthy and helpful.
There are really good psychologists out there who can do much more. It takes a little luck and a little good fit, but it can happen.
>AI companies are partly responsible for promoting these tools as something more than statistical models,[...]
This might be exactly the issue. Just today I read people complaining that the newest ChatGPT can't solve letter-counting riddles.
Companies just don't speak loudly enough about the shortcomings of LLM-based AI that result from its architecture and are bound to happen.
Of the people I have known to call the helplines, the results have been either dismally useless, or those people were arrested, involuntarily committed, subjected to inhumane conditions, and then hit with massive medical bills. Either way, some got “help” and some still killed themselves anyway.
Depending on where you live, this may well result in the vulnerable person being placed under professional supervision that actively prevents them from dying.
That's a fair bit more valuable than when you describe it as raising a flag.
Yeah... I have been in a locked psychiatric ward many times before, and never once did I come out better. They only address the physical part there for a few days and kick you out until next time. Or do you think people should be physically restrained for a long time without any actual help?
> A human can also say the wrong things to push someone in a certain direction. A psychologist, or anyone else for that matter, can't stop someone from committing suicide if they've already made up their mind about it. They're educated on human psychology, but they're not miracle workers. The best they can do is raise a flag if they suspect self-harm, but then again, so could a machine.
ChatGPT essentially encouraged a kid not to take a cry-for-help step that might have saved their life. This is not a question of a bad psychologist; it's a question of a sociopathic one that may randomly encourage harm.
But that's not the issue. The issue is that a kid is talking to a machine without supervision in the first place, and presumably taking advice from it. The main questions are: where are the guardians of this child? What is the family situation and living environment?
A child thinking about suicide is clearly a sign that there are far greater problems in their life than taking advice from a machine. Let's address those first instead of demonizing technology.
To be clear: I'm not removing blame from any AI company. They're complicit in the ways they market these tools and how they make them accessible. But before we vilify them for being responsible for deaths, we should consider that there are deeper societal problems that should be addressed first.
I had girl friends who did it to get attention from their parents/boyfriends/classmates. They acknowledged it back then. It wasn't some secret. It was essentially for attention, aesthetics and the light headed feeling. I still have an A4 page somewhere with a big ass heart drawn on it by an ex with her own blood. Kids are just weird when the hormones hit. The cute/creepy ratio of that painting has definitely gotten worse with time.
It is the issue at least in the sense that it's the one I was personally responding to, thanks. And there are many issues, not just the one you are choosing to focus on.
"Deeper societal problems" is a typical get-out clause for all harmful technology.
It's not good enough. Like, in the USA they say "deeper societal problems" about guns; other countries ban them and have radically fewer gun deaths while they are also addressing those problems.
It's not an either-we-ban-guns-or-we-help-mentally-ill-people. Por qué no los dos? Deeper societal problems are not represented by a neat dividing line between cause and symptom; they are cyclical.
The current push towards LLMs and other technologies is one of the deepest societal problems humans have ever had to consider.
ChatGPT engaged in an entire line of discussion that no human counsellor would engage in, leading to an outcome that no human intervention (except that of a psychopath) would cause. Because it was sycophantic.
Just saying "but humans also" is wholly irrational in this context.
> It's not an either-we-ban-guns-or-we-help-mentally-ill-people. Por qué no los dos?
Because it's irrational to apply a blanket ban on anything. From drugs, to guns, to foods and beverages, to technology. As history has taught us, that only leads to more problems. You're framing it as a binary choice, when there is a lot of nuance required if we want to get this right. A nanny state is not the solution.
A person can harm themselves or others using any instrument, and be compelled to do so for any reason. Whether that's because of underlying psychological issues, or because someone looked at them funny. As established—humans are complex, and we have no way of knowing exactly what motivates someone to do anything.
While there is a strong argument to be made that no civilian should have access to fully automated weapons, the argument to allow civilians access to weapons for self-defense is equally valid. The same applies to any technology, including "AI".
So if we concede that nuance is required in this discussion, then let's talk about it. Instead of using "AI" as a scapegoat, and banning it outright to "protect the kids", let's discuss ways that it can be regulated so that it's not as widely accessible or falsely advertised as it is today. Let's acknowledge that responsible usage of technology starts in the home. Let's work on educating parents and children about the role technology plays in their lives, and how to interact with it in healthy ways. And so on, and so forth.
It's easy to interpret stories like this as entirely black or white, and have knee-jerk reactions about what should be done. It's much more difficult to have balanced discussions where multiple points of view are taken into consideration. And yet we should do the difficult thing if we want to actually fix problems at their core, instead of just applying quick band-aid "solutions" to make it seem like we're helping.
> ChatGPT engaged in an entire line of discussion that no human counsellor would engage in, leading to an outcome that no human intervention (except that of a psychopath) would cause. Because it was sycophantic.
You're ignoring my main point: why are these tools treated as "counsellors" in the first place? That's the main issue. You're also ignoring the possibility that ChatGPT may have helped many more people than it's harmed. Do we have statistics about that?
What's irrational is blaming technology for problems that are caused by a misunderstanding and misuse of it. That is no more rational than blaming a knife company when someone decides to use a knife as a toothbrush. It's ludicrous.
AI companies are partly to blame for false advertising and not educating the public sufficiently about their products. And you could say the same for governments and the lack of regulation. But the blame is first and foremost on users, and definitely not on the technology itself. A proper solution would take all of these aspects into consideration.
That relates more to purposefully harming some people to save other people. Doing something that has the potential to harm a person but statistically has a greater likelihood of helping them is something doctors do all the time. They will even use methods that are guaranteed to do harm to the patient, as long as they have a sufficient chance to also bring a major benefit to the same patient.
When evaluating good vs harm for drugs or other treatments the risk for lethal side effects must be very small for the treatment to be approved. In this case it is also difficult to get reliable data on how much good and harm is done.
This is not so much "more good than harm" like a counsellor that isn't very good.
This is more "sometimes it will seemingly actively encourage them to kill themselves and it's basically a roll of the dice what words come out at any one time".
If a counsellor does that they can be prosecuted and jailed for it, no matter how many other patients they help.
Yet, if you ask the word generator to generate words in the form of advice, like any machine or code, it will do exactly what you tell it to do. The fact people are asking implies a lack of common sense by your definition.
Sertraline can increase suicidal thoughts in teens. Should anti-depressants not be allowed near suicidal/depressed teens?
Let's look at the problem from the perspective of regular people. YMMV, but in the countries I know most about, Poland and Norway (albeit a little less so for Norway), it's not about ChatGPT vs therapist. It's about ChatGPT vs nothing.
I know people who earn an above-average income and still spend a significant portion of it (north of 20%) on therapy/meds. And many don't, because mental health isn't that important to them. Or rather, they're not aware of how helpful therapy can be. Or they just can't afford the luxury (which I claim it is) of private mental health treatment.
ADHD diagnosis took 2.5y from start to getting meds, in Norway.
Many kids grow up before their wait time in queue for pediatric psychologist is over.
It's not ChatGPT vs shrink. It's ChatGPT vs nothing or your uncle who tells you depression and ADHD are made up and you kids these days have it all too easy.
As someone who lives in America, and is prescribed meds for ADHD; 2.5 years from asking for help to receiving medication _feels_ right to me in this case. The medications have a pretty negative side effect profile in my experience, and so all options should be weighed before prescribing ADHD-specific medication, imo
Sure, ask it to write an interesting novel or a symphony, and present it to humans without editing. The majority of literate humans will easily tell the difference between that and human output. And it’s not allowed to be too derivative.
When AI gets there (and I’m confident it will, though not confident LLMs will), I think that’s convincing evidence of intelligence and creativity.
I accept that test other than the "too derivative" part which is an avenue for subjective bias. AI has passed that test for art already: https://www.astralcodexten.com/p/ai-art-turing-test As for a novel that is currently beyond the LLMs capabilities due to context windows, but I wouldn't be surprised if it could do short stories that pass this Turing test right now.
Plastic bags shouldn't be allowed near suicidal teens. Scarves shouldn't be. Underwear is also a strangulation hazard for the truly desperate. Anything long-sleeved, even. Knives of any kind, including butter knives. Cars, obviously.
We have established that suicidal people should be held naked (or with an apron) in solitary isolation in a padded white room and saddled with medical bills larger than a four-year college tuition. That'll help'em.
One problem with treatment modalities is that they ignore material conditions and treat everything as dysfunction. Lots of people are looking for a way out not because of some kind of physiological clinical depression, but because they've driven themselves into a social & economic dead-end and they don't see how they can improve. More suicidal people than not would cease to be suicidal if you handed them $180,000 in concentrated cash, a pardon for their crimes, and a cute neighbor complimenting them, which successfully neutralizes a majority of socioeconomic problems.
We deal with suicidal ideation in some brutal ways, ignoring the material consequences. I can't recommend suicide hotlines, for example, because it's come out that a lot of them concerned with liability call the cops, who come in and bust the door down, pistol whip the patient, and send them to jail, where they spend 72 hours and have some charges tacked on for resisting arrest (at this point they lose their job). Why not just drone strike them?
> We have established that suicidal people should be held naked (or with an apron) in solitary isolation in a padded white room and saddled with medical bills larger than a four-year college tuition. That'll help'em.
What is "concentrated cash"? Do you have to dilute it down to standard issue bills before spending it? Someone hands you 5 lbs of gold, and have to barter with people to use it?
"He didn't need the money. He wasn't sure he didn't need the gold." (an Isaac Asimov short story)
> More suicidal people than not, would cease to be suicidal if ...
The one dude that used the money to build a self-murder machine and then televised it would ruin it for everyone though. :s
The reality is most systems are designed to cover asses more than meet needs, because systems get abused a lot - by many different definitions, including being used as scapegoats by bad actors.
Yeah, if we know they’re suicidal, it’s legitimately grippy socks time I guess?
But there is zero actually effective way to do that as an online platform. And plenty of ways that would cause more harm (statistically).
My comment was more ‘how the hell would you know in a way anyone could actually do anything reasonable, anyway?’.
People spam ‘Reddit cares’ as a harassment technique, claiming people are suicidal all the time. How much should the LLM try to guess? If they use all ‘depressed’ words? What does that even mean?
What happens if someone reports a user is suicidal, and we don’t do anything? Are we now on the hook if they succeed - or fail and sue us?
Do we just make a button that says ‘I’m intending to self harm’ that locks them out of the system?
Why are we imprisoning suicidal people? That will surely add incentive to have someone raise their hand and ask for help: taking their freedoms away...
Why do we put people in a controlled environment where their available actions are heavily restricted and their ability to use anything they could hurt themselves is taken away? When they are a known risk of hurting themselves or others?
Lemme get right on vibecoding that! Maybe three days, max, before I'll have an MVP. When can I expect your cheque funding my non-profit? It'll have a quadrillion dollar valuation by the end of the month, and you'll want to get in on the ground floor, so better act fast!
I'll gladly diss LLMs in a whole bunch of ways, but "common sense"? No.
By the "common sense" definitions, LLMs have "intelligence" and "understanding", that's why they get used so much.
Not that this makes the "common sense" definitions useful for all questions. One of the worse things about LLMs, in my opinion, is that they're mostly a pile of "common sense".
Now this part:
> Add in the commercial incentives of 'Open'AI to promote usage for anything and everything and you have a toxic mix.
I agree with you on…
…with the exception of one single word: It's quite cliquish to put scare quotes around the "Open" part on a discussion about them publishing research.
More so given that people started doing this in response to them saying "let's be cautious, we don't know what the risks are yet and we can't un-publish model weights" with GPT-2, and oh look, here it is being dangerous.
After studying it extensively with real-world feedback. From everything I've seen, the statement wasn't "will never release", it was vaguer than that.
> they explicitly stated that they wouldn't release GPT-3 for marketing/financial reasons
Not seen this, can you give a link?
> it being dangerous didn't stop them from offering the service for a profit.
Please do be cynical about how honest they were being — I mean, look at the whole of Big Tech right now — but the story they gave was self-consistent:
[Paraphrased!] (a) "We do research" (they do), "This research costs a lot of money" (it does), and (b) "As software devs, we all know what 'agile' is and how that keeps product aligned with stakeholder interest." (they do) "And the world is our stakeholder, so we need to release updates for the world to give us feedback." (???)
That last bit may be wishful thinking, I don't want to give the false impression that I think they can do no wrong (I've been let down by such optimism a few other times), but it is my impression of what they were claiming.
I was confusing GPT3 with GPT4. Here's the quote from the paper (emphasis mine) [1]:
> Given both THE COMPETITIVE LANDSCAPE and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.
> Before declaring that it shouldn't be near anyone with psychological issues, someone in the relevant field should study whether the positive impact on suicides is greater than negative or vice versa
That is the literal opposite of how medical treatment is regulated. Treatments should be tested and studied before availability to the general public. It's irresponsible in the extreme to suggest this.
Maybe it's causing even more deaths than we know of, and those don't make the news either?
If we think this way, then we don't need to improve the safety of anything (cars, trains, planes, ships, etc.), because we would need the big picture, though... maybe these vehicles cause deaths (which is awful), but they're also transporting people to their destinations alive. If there are that many people using them, I wouldn't be surprised if they actually transport some people in comfort, and that's not going to make the news.
> Maybe it's causing even more deaths than we know, and these doesn't make the news either?
Of course, and that's part of why I say that we need to measure the impact. It could be net positive or negative, we won't know if we don't find out.
> If we think this way, then we don't need to improve the safety of anything (cars, trains, planes, ships, etc.), because we would need the big picture, though... maybe these vehicles cause deaths (which is awful), but they're also transporting people to their destinations alive. If there are that many people using them, I wouldn't be surprised if they actually transport some people in comfort, and that's not going to make the news.
I'm not advocating for not improving security, I'm arguing against a comment that said that "ChatGPT should be nowhere near anyone dealing with psychological issues", because it can cause death.
Following your analogy, cars objectively cause deaths (and not only of people with psychological issues, but of people in general) and we don't say that "they should be nowhere near a person". We improve their safety even though zero deaths is probably impossible, which we accept because they are useful. This is a big-picture approach.
True. But it feels like a fairer comparison would be with a huge healthcare company that failed to vet one of its therapists properly, so a crazy pro-suicide therapist slipped through the net. Would we petition to shut down the whole company for this rare event? I suppose it would depend on whether the company could demonstrate what it is doing to ensure it doesn’t happen again.
Maybe you shouldn't shut down OpenAI over this. But each instance of a particular ChatGPT model is the same as all the others. This is like a company that has a magical superhuman therapist that can see a million patients a day. If they're found to be encouraging suicide, then they need to be stopped from providing therapy. The fact that this is the company's only source of revenue might mean that the company has to shut down over this, but that's just a consequence of putting all your eggs in one basket.
But you would have to be a therapist. If a suicidal person went up to a stranger and started a conversation, there would be no consequences. That's more analogous to ChatGPT.
Let's maybe not give the benefit of the doubt to the startup which has shown itself to have the moral scruples of Vault-Tec just because what they're doing might work out fine for some of the people they're experimenting on.
Since, as you say, this utilitarian view is rather common, perhaps it would be good to show _why_ this is problematic by presenting a counterargument.
The basic premise under GP's statements is that although not perfect, we should use the technology in such a way that it maximizes the well-being of the largest number of people, even if it comes at the expense of a few.
But therein lies a problem: we cannot really measure well-being (or utility). This becomes obvious if you look at individuals instead of the aggregate: imagine LLM therapy becomes widespread and a famous high-profile person and your (not famous) daughter end up in "the few" for whom LLM therapy goes terribly wrong and who commit suicide. The loss of the famous person will cause thousands (perhaps millions) of people to be a bit sad, and the loss of your daughter will cause you unimaginable pain. Which one is greater? Can they even be compared? And how many people with a successful LLM therapy are enough to compensate for either one?
Unmeasurable well-being then makes these moral calculations at best inexact and at worst completely meaningless. And if they are truly meaningless, how can they inform your LLM therapy policy decisions?
Suppose for the sake of the argument we accept the above, and there is a way to measure well-being. Then would it be just? Justice is a fuzzy concept, but imagine we reverse the example above: many people lose their lives because of bad LLM therapy, but one very famous person in the entertainment industry is saved by LLM therapy. Let's suppose then that this famous person's well-being plus the millions of spectators' improved well-being (through their entertainment) is worth enough to compensate for the people who died.
This means saving a famous funny person justifies the death of many. This does not feel just, does it?
There is a vast amount of literature on this topic (criticisms of utilitarianism).
We have no problem doing this in other areas. Airline safety, for example, is analyzed quantitatively by assigning a monetary value to an individual human life and then running the numbers. If some new safety equipment costs more money than the value of the lives it would save, it's not used. If a rule would save lives in one way but cost more lives in another way, it's not enacted. A famous example of this is the rule for lap infants. Requiring proper child seats for infants on airliners would improve safety and save lives. It also increases cost and hassle for families with infants, which would cause some of those families to choose driving over flying for their travel. Driving is much more dangerous and this would cost lives. The FAA studied this and determined that requiring child seats would be a net negative because of this, and that's why it's not mandated.
There's no need to overcomplicate it. Assume each life has equal value and proceed from there.
Our standard approach for new medical treatments is to require proof of safety and efficacy before it's made available to the general public. This is because it's very, very easy for promising-looking treatments to end up being harmful.
"Before declaring that it shouldn't be near anyone with psychological issues" is backwards. Before providing it to people with psychological issues, someone should study whether the positive impact is greater than the negative.
Trouble is, this is such a generalized tool that it's very hard to do that.
> someone in the relevant field should study whether the positive impact on suicides is greater than negative or vice versa
we already have an approval process for medical interventions. are you suggesting the government shut ChatGPT down until the FDA can investigate its use for therapy? because if so I can get behind that
You make a good point. While they absolutely and unequivocally said that it is currently impossible to tell whether the suicides are bad or not, they also sort of wondered aloud if in the future we might be able to develop a methodology to determine whether the suicides are bad or not. This is an important distinction, because...
Basically, the author tried to simulate someone going off into some sort of psychosis with a bunch of different models; and got wildly different results. Hard to summarize, very interesting read.
>should convince you that ChatGPT should be nowhere near anyone dealing with psychological issues.
Is that a debate worth having though?
If the tool is available universally it is hard to imagine any way to stop access without extreme privacy measures.
Blocklisting people would require public knowledge of their issues, and one risks the law enforcement effect, where people don’t seek help for fear that it ends up in their record.
Yes. Otherwise we're accepting "OpenAI wants to do this so we should quietly get out of the way".
If ChatGPT has "PhD-level intelligence" [1] then identifying people using ChatGPT for therapy should be straightforward, even more so for users with explicit suicidal intentions.
As for what to do, here's a simple suggestion: make it a three-strikes system. "We detected you're using ChatGPT for therapy - this is not allowed by our ToS as we're not capable of helping you. We kindly ask you to look for support within your community, as we may otherwise have to suspend your account. This chat will now stop."
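To make the shape of that suggestion concrete, here is a minimal sketch of how a three-strikes policy could sit on top of some upstream "is this therapy?" classifier. Everything here - the classifier hook, the strike threshold, the warning text - is a hypothetical illustration of the idea in the comment above, not a description of anything OpenAI actually does.

```python
# Hypothetical sketch of the "three strikes" idea described above.
# The classifier, thresholds, and message text are all assumptions for
# illustration; nothing here reflects how any real provider moderates chats.

WARNING = (
    "We detected you're using this chat for therapy. This is not allowed by "
    "our ToS, as we're not capable of helping you. Please look for support "
    "within your community; further violations may lead to account suspension."
)

class StrikePolicy:
    def __init__(self, max_strikes: int = 3):
        self.max_strikes = max_strikes
        self.strikes: dict[str, int] = {}  # user_id -> strike count

    def handle_message(self, user_id: str, looks_like_therapy: bool) -> str:
        """Return an action: 'allow', 'warn_and_stop', or 'suspend'."""
        if not looks_like_therapy:
            return "allow"
        self.strikes[user_id] = self.strikes.get(user_id, 0) + 1
        if self.strikes[user_id] >= self.max_strikes:
            return "suspend"
        return "warn_and_stop"  # end the chat and show WARNING

# Example: a user repeatedly flagged by some upstream classifier (not shown).
policy = StrikePolicy()
for _ in range(3):
    print(policy.handle_message("user-123", looks_like_therapy=True))
# -> warn_and_stop, warn_and_stop, suspend
```

The bookkeeping is trivial; the hard (and contested) part is the upstream classifier and whether this kind of policing is desirable at all, which is exactly what the replies below dispute.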
>Yes. Otherwise we're accepting "OpenAI wants to do this so we should quietly get out of the way".
I think it's fair to demand that they label/warn about the intended usage, but policing it is dystopian. Do car manufacturers immediately call the police when the speed limit is surpassed? Should phone manufacturers stop calls when the conversation deals with illegal topics?
I'd much rather regulation went the exact opposite way, seriously limiting the amount of analysis they can run over conversations, particularly when content is not de-anonymised.
If there's one thing we don't want, it's OpenAI storing data about mental issues and potentially selling it to insurers, for example. The fact that they could be doing this right now is IMO much more dangerous than tool misuse.
Cars do have AEB (auto emergency braking) systems, for example, and the NHTSA is requiring all new cars to include it by 2029. If there are clear risks, it's normal to expect basic guardrails.
> I'd much rather regulation went the exact opposite way, seriously limiting the amount of analysis they can run over conversations, particularly when content is not de-anonymised.
> If there's one thing we don't want, it's OpenAI storing data about mental issues and potentially selling it to insurers, for example. The fact that they could be doing this right now is IMO much more dangerous than tool misuse.
We can have both. If it is possible to have effective regulation preventing an LLM provider from storing or selling users' data, nothing would change if there were a ban on chatbots providing medical advice. OpenAI already has plenty of things it prohibits in its ToS.
Are people using ChatGPT for therapy more vulnerable than people using it for medical or legal advice? From my experience, talking about your problems to the unaccountable bullshit machine is not very different from the "real" therapy.
> Are people using ChatGPT for therapy more vulnerable than people using it for medical or legal advice?
Probably. If you are in therapy because you’re feeling mentally unstable, by definition you’re not as capable of separating bad advice from good.
But your question is a false dichotomy, anyway. You shouldn’t be asking ChatGPT for either type of advice. Unless you enjoy giving yourself psychiatric disorders.
I've been talking about my health problems to unaccountable bullshit machines my whole life and nobody ever seemed to think it was a problem. I talked to about a dozen useless bullshit machines before I found one that could diagnose me with narcolepsy. Years later out of curiosity I asked ChatGPT and it nailed the diagnosis.
Maybe the tool should not be available universally.
Maybe it should not be available to anyone.
If it cannot be used safely by a vulnerable class of people, and that class cannot be identified reliably enough to block their use, and its primary purpose is simply to bring OpenAI more profit, then maybe the world is better off without it being publicly available.
>If it cannot be used safely by a vulnerable class of people, and that class cannot be identified reliably enough to block their use
Should we stop selling kitchen knives, packs of cards or beer as well?
This is not a new problem in society.
>and its primary purpose is simply to bring OpenAI more profit
This is true for any product, unless you mean that it has no other purpose, which is trivially contradicted by the number of people who decide to pay for it.
I don’t disagree that they are clearly unhealthy for people who aren’t mentally well, I just differ on where the role of limiting access lies.
I think it's up to the legal guardian or medical professionals to check that, and providers should at most be asked to comply with state restrictions, the same way addicts can be put on a list banning them from accessing a casino.
The alternative places OpenAI and others in the role of surveilling the population and deciding what's acceptable, which IMO has been the big fuckup of social media regulation.
I do think there is an argument for how LLMs expose interaction - the friendliness that mimics human interaction should be changed for something less parasocial-friendly. More interactive Wikipedia and less intimate relationship.
Then again, the human-like behavior reinforces the fact that it’s faulty knowledge, and speaking in an authoritative manner might be more harmful during regular use.
Something is indeed NOT better than nothing. However, for those with mental and emotional issues (likely stemming from social / societal failures in the first place) anything would be better than nothing, because they need interaction and patience —two things these AI tools have in abundance.
Sadly there is no alternative. This is happening and there’s no going back. Many will be affected in detrimental ways (if not worse). We all go on with our lives because that which does not directly affect us is not our problem —is someone else’s problem/responsibility.
Exactly, any company that offers chatbots to the public should do what Google did regarding suicide searches, remove harmful websites and provide info how to contact mental health professionals. Anything else would be corporate suicide (pun not intended).
I know that minors under age 13 are not allowed to use the app. But 13-18 is fine? Not sure why. Might also be worth looking into making apps like these 18+. Whether by law or by liability, if someone 20+ gets, say, food poisoning by getting recipes from chatgpt, then you can argue that it's the user's fault for not fact checking, but if a 15yo kid gets food poisoning, it's harder to argue that it's the kid's fault.
Or how many were pushed down the path towards discussing suicide because they were talking to an LLM that directed them that way. It's entirely possible the LLMs are reinforcing bleak feelings with its constant "you're absolutely correct!" garbage.
"One may think that something is better than nothing, but a bot enabling your destructive impulses is indeed worse than nothing."
And how would a layman know the difference?
If I desperately need help with mental item x and I have no clue how to get help, am very very ashamed of even asking for help about mental item x, or there are actually no resources available, I will turn to anything rather than nothing. Because item x still exists and is making me suffer 24/7.
At least the bot pretends to listen, some humans cannot even do that.
I think you're being too generous to the idea that it could help without any evidence.
If we assume that there's therapeutic value in bringing your problems out, then a diary is a better tool. And if we believe that it's the feedback that's helping, well, we have cases of ChatGPT encouraging people's psychosis.
We know that a layman often doesn't know the difference between what's helpful and what isn't - that's why loving relatives so often end up enabling people's addictions thinking they're helping. But I'd argue that a system that confidently gives mediocre feedback at best and actively psychosis-reinforcing feedback at worst is not a system that should be encouraged simply because it's cheap.
I also wanted to snarkily write "even a dog would be better", but the more I thought about it the more I realized that yes, a dog would probably be a solid alternative.
OpenAI tried to get rid of the excessively sycophantic model (4o) but there was a massive backlash. They eventually relented and kept it as a model offering in ChatGPT.
OpenAI certainly has made mistakes with its rollouts in the past, but it is effectively impossible to keep everyone with psychological issues away from a free online web app.
>ChatGPT should be nowhere near anyone dealing with psychological issues.
Should every ledge with a >10ft drop have a suicide net? How would you imagine this would be enforced, requiring everyone who uses ChatGPT to agree to an "I am mentally stable" proviso?
Do you think that it's free and available to anyone means it doesn't have any responsibility to users? Or have any responsibility for how it's used, or what it says?
It's an open problem in AI development to make sure LLMs never say the "wrong" thing. No matter what, when dealing with a non-deterministic system, one can't anticipate or oversee the moral shape of all its outputs. There are a lot of things, however, that you can't get ChatGPT to say, and they often ban users after successive violations, so it isn't true that they are fully abdicating responsibility for the use and outputs of their models in realms where the harm is tractable.
This is not surprising at all. Having gone through therapy a few years back, I would have had a chat with LLMs if I was in a poor mental health situation. There is no other system that is available at scale, 24x7, on my phone.
A chat like this is not a solution though; it is an indicator that our societies have issues in large parts of our population that we are unable to deal with. We are not helping enough people. Topics like mental health are still difficult to discuss in many places. Getting help is much harder.
I do not know what OpenAI and other companies will do about it and I do not expect them to jump in to solve such a complex social issue. But perhaps this inspires other founders who may want to build a company to tackle this at scale. Focusing on help, not profits. This is not easy, but some folks will take such challenges. I choose to believe that.
Someone elsewhere in the thread pointed out that it's truly hard to open up to another human, especially face to face. Even if you know they're a professional, it's awkward, it can be embarrassing, and there's stigma about a lot of things people ideally go to therapy for.
I mean, hell, there's people out there with absolutely terrible dental health who are avoiding going to the dentist because they're ashamed of it, even though logically, dentists have absolutely seen worse, and they're not there to judge, they're just there to help fix the problem.
I choose to believe that too. I think more people are interested than we’d initially believe. Money restrains many of our true wants.
Sidebar — I do sympathize with the problem being thrust upon them, but it is now theirs to either solve or refuse.
A chat like this is all you've said, and dangerous, because they play a middle ground: presenting it as if a machine can evaluate your personal situation and reason about it, when in actuality you're getting third-party therapy about someone else's situation in /r/relationshipadvice.
We are not ourselves when we are fallen down. It is difficult to parse through what is reasonable advice and what is not. I think it can help most people but this can equally lead to a disaster… It is difficult to weigh.
It's worse than parroting advice that's not applicable. It tells you what you told it to tell you. It's very easy to get it to reinforce your negative feelings. That's how the psychosis stuff happens, it amplifies what you put into it.
This makes no sense at all to me. You can choose to gather evidence and evaluate that evidence, you can choose to think about it, and based on that process a belief will follow quite naturally. If you then choose to believe something different, it's just self-deception.
You are right, and it gives us a chance to do something about it. We always had data about people who are struggling, but now we see how many are trying to reach out for advice or help.
> A chat like this is not a solution though, it is an indicator that our societies have issues
Correct, many of which are directly, a skeptic might even argue deliberately, exacerbated by companies like OpenAI.
And yet your proposal is
> a company to tackle this at scale.
What gives you the confidence that any such company will focus consistently, if at all,
> on help, not profits
Given it exists in the same incentive matrix as any other startup? A matrix which is far less likely to throw fistfuls of cash at a nice-sounding idea now than it was in recent times. This company will need to resist its investors' pressure to find returns. How exactly will it do this? Do you choose to believe someone else has thought this through, or will do so? At what point does your belief become convenient for people who don't share your admirably prosocial convictions?
Is OpenAI taking steps to reduce access to mental healthcare in an attempt to force more people to use their tools for such services? Or do you mean in a more general sense that any companies that support the Republican Party are complicit in exacerbating the situation? At least that one has a clear paper trail.
(2) they shouldn't have it lying around in a way that it can be attributed to particular individuals
(3) imagine that it leaks to the wrong party, it would make the hack of that Finnish institution look like child's play
(4) if people try to engage it in such conversations the bot should immediately back out because it isn't qualified to have these conversations
(5) I'm surprised it is that little; they claim such high numbers for their users that this seems low.
In the late 90's when ICQ was pretty big we experimented with a bot that you could connect to that was fed in the background by a human. It didn't take a day before someone started talking about suicide to it and we shut down the project realizing that we were in no way qualified to handle human interaction at that level. It definitely wasn't as slick or useful as ChatGPT but it did well enough and responded naturally (more naturally than ChatGPT) because there was a person behind it that could drive 100's of parallel conversations.
If you give people something that seems to be a listening ear they will unburden themselves on that ear regardless of the implementation details of the ear.
HIPAA only applies to covered healthcare entities. If you walk into a McDonald's and talk about your suicidal ideation with the cashier, that's not HIPAA covered.
To become a covered entity, the business has to either work with a healthcare provider or health data transmitter, or do business as one.
Notably, even in the above case, HIPAA only applies to the healthcare part of the entity. So if McDonald's collocated pharmacies in their restaurants, HIPAA would only apply to the pharmacists, not the cashiers.
That's why you'll see, in convenience stores with pharmacies, that the registers are separated so healthcare data doesn't go to someone who isn't covered by HIPAA.
**
As for how ChatGPT gets these stats... when you talk about a sensitive or banned topic like suicide, their backend logs it.
Originally, they used that to cut off your access so you wouldn't find a way to cause a PR failure.
Under Medical Device Regulation in the EU, the main purpose of the software needs to be medical for it to become a medical device. In ChatGPT's case, this is not the primary use case.
Same with fitness trackers. They aren't medical devices, because that's not their purpose, but some users might use them to track medical conditions.
Then the McDonalds cashier also becomes a medical practitioner the moment they tell you that killing yourself isn't the answer. And if I tell my friend via SMS that I am thinking about suicide, do both our phones now also become HIPAA-covered medical devices?
Privacy is vital, but this isn't covered under HIPAA. As they are not a covered entity nor handling medical records, they're beholden to the same privacy laws as any other company.
HIPAA's scope is actually basically nonexistent once you get away from healthcare providers, insurance companies, and the people that handle their data/they do business with. Talking with someone (even a company) about health conditions, mental health, etc. does not make them a medical provider.
> Talking with someone (even a company) about health conditions, mental health, etc. does not make them a medical provider.
Also not when the entity behaves as though they are a mental health service professional? At what point do you put the burden on the apparently mentally ill person to know better?
That line of reasoning would just lead to every LLM message and every second comment on the internet starting with the sentence "this is not medical advice". It would do nothing but add another layer of noise to all communication
You're not putting the burden on them. They don't need to comply with HIPAA. But you can't just turn entities into healthcare providers when they aren't one and don't claim to be one.
Maybe. Going on a tangent: in theory Gmail has access to lots of similar information---just by having approximately everyone's emails. Does HIPAA apply to them? If not, why not?
> If you give people something that seems to be a listening ear they will unburden themselves on that ear regardless of the implementation details of the ear.
Cf. Eliza, or the Rogerian therapy it (crudely) mimics.
> Maybe. Going on a tangent: in theory Gmail has access to lots of similar information---just by having approximately everyone's emails. Does HIPAA apply to them? If not, why not?
That's a good question.
Intuitively: because it doesn't attempt to impersonate a medical professional, nor does it profess to interact with you on the subject matter at all. It's a communications medium, not an interactive service.
> if people try to engage it in such conversations the bot should immediately back out because it isn't qualified to have these conversations
For a lot of people, especially in poorer regions, LLMs are a mental health lifeline. When someone is severely depressed they can lay in bed the whole day without doing anything. There is no impulse, as if you tried starting a car and nothing happens at all, so you can forget about taking it to the mechanic in the first place by yourself. Even in developed countries you can wait for a therapist appointment for months, and that assumes you navigated a dozen therapists that are often not organized in a centralized manner. You will get people killed like this, undoubtedly.
On the other hand, LLMs are well past the point of leading people into suicidal actions. At the very least they are useful to bridge the gap between suicidal thoughts appearing and actually getting to see a therapist.
Sure, but you could also apply this reasoning to a blank sheet of paper. But while it's absurd to hold the manufacturer of the paper accountable for what people write on it, it makes sense to hold OpenAI accountable for their chatbots encouraging suicide.
Tangent but now I’m curious about the bot, is there a write-up anywhere? How did it work? If someone says “hi”, what did the bot respond and what did the human do? I’m picturing ELIZA with templates with blanks a human could fill in with relevant details when necessary.
Basically Levenshtein on previous responses minus noise words. So if the response was 'close enough' then the bot would use a previously given answer, if it was too distant the human-in-the-loop would get pinged with the previous 5 interactions as context to provide a new answer.
Because the answers were structured as a tree every ply would only go down in the tree which elegantly avoided the bot getting 'stuck in a loop'.
The - for me at the time amazing, though linguists would have thought it trivial - insight was how incredibly repetitive human interaction is.
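For anyone curious what that looks like in code, here is a rough sketch of the matching idea as described (Levenshtein distance on noise-word-stripped messages, reusing a prior answer when it's close enough, otherwise escalating to the human). The noise-word list, the threshold, and the omitted answer-tree bookkeeping are simplified assumptions on my part, not the original ICQ bot's implementation.

```python
# A minimal sketch of the matching idea described above: strip noise words,
# compare the incoming message against previously answered ones with
# Levenshtein distance, and reuse the old answer if it's "close enough".
# The tree structure and human-in-the-loop plumbing are omitted; the noise
# words and threshold are made-up examples.

NOISE = {"the", "a", "an", "is", "are", "i", "you", "to", "of", "and"}

def normalize(text: str) -> str:
    return " ".join(w for w in text.lower().split() if w not in NOISE)

def levenshtein(a: str, b: str) -> int:
    # Standard dynamic-programming edit distance, row by row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def reply(message: str, history: dict[str, str], threshold: int = 10) -> str | None:
    """Return a cached answer if a past message is close enough, else None
    (meaning: ping the human operator with recent context)."""
    key = normalize(message)
    best = min(history, key=lambda k: levenshtein(key, k), default=None)
    if best is not None and levenshtein(key, best) <= threshold:
        return history[best]
    return None  # human-in-the-loop takes over here

history = {normalize("How are you today?"): "Doing fine, thanks! And you?"}
print(reply("how are you doing today", history))  # reuses the cached answer
print(reply("tell me about your job", history))   # None -> escalate to human
```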
If there is somebody in the current year who still thinks they would not store, process/train on, and use/sell all data, they probably need to see a doctor.
No, obviously it would not. But if we pretended to be psychiatrists or therapists then we should be expected to behave as such with your data if given to us in confidence rather than in public.
> we shut down the project realizing that we were in no way qualified to handle human interaction at that level
Ah, when people had a spine and some sense of ethics, before everything dissolved into a late-stage-capitalism, everything-for-profit ethos. Even you yourself are a "brand" to be monetised; even your body is to be sold.
Most people don't understand just how mentally unwell the US population is. Of course there are one million talking to ChatGPT about suicide weekly. This is not a surprising stat at all. It's just a question of what to do about it.
At least OpenAI is trying to do something about it.
Are you sure ChatGPT is the solution? It just sounds like another "savior complex" sell spin from tech.
1. Social media -> connection
2. AGI -> erotica
3. Suicide -> prevention
All these for engagement (i.e. addiction). It seems like the tech industry is itself the root cause, trying to mask the problem by brainwashing the population.
Whether solution or not, fact is AI* is the most available entity for anyone who has sensitive issues they'd like to share. It's (relatively) cheap, doesn't judge, is always there when wanted/needed and can continue a conversation exactly where left off at any point.
* LLM would of course be technically more correct, but that term doesn't appeal to people seeking some level of intelligent interaction.
I personally take no opinion about whether or not they can actually solve anything, because I am not a psychologist and have absolutely no idea how good or bad ChatGPT is at this sort of thing, but I will say I'd rather the company at least tries to do some good given that Facebook HQ is not very far from their offices and appears to have been actively evil in this specific regard.
> but I will say I'd rather the company at least tries to do some good given that Facebook HQ is not very far from their offices and appears to have been actively evil in this specific regard.
Sure! let's take a look at OpenAI's executive staff to see how equipped they are to take a morally different approach than Meta.
Fidji Simo - CEO of Applications (formerly Head of Facebook at Meta)
Vijaye Raji - CTO of Applications (formerly VP of Entertainment at Meta)
Srinivas Narayanan - CTO of B2B Applications (formerly VP of Engineering at Meta)
Kate Rouch - Chief Marketing Officer (formerly VP of Brand and Product Marketing at Meta)
Irina Kofman - Head of Strategic Initiatives (formerly Senior Director of Product Management for Generative AI at Meta)
Becky Waite - Head of Strategy/Operations (formerly Strategic Response at Meta)
David Sasaki - VP of Analytics and Insights (formerly VP of Data Science for Advertising at Meta)
Ashley Alexander - VP of Health Products (formerly Co-Head of Instagram Product at Meta)
Ryan Beiermeister - Director of Product Policy (formerly Director of Product, Social Impact at Meta)
When given the right prompts, LLMs can be very effective at therapy. Certainly my wife gets a lot of mileage out of having ChatGPT help her reframe things in a better way. However, "the right prompts" are not the ones that most mentally ill people would choose for themselves. And it is very easy for ChatGPT to become part of a person's delusion spiral, rather than be a helpful part of trying to solve it.
Is it better or worse than alternatives? Where else would a suicidal person turn, a forum with other suicidal people? Dry Wikipedia stats on suicide? Perhaps friends? Knowing how ChatGPT replies to me, I’d have a lot of trouble getting negativity influenced by it, any more than by yellow pages. Yeah, it used to try more to be your friend but GPT5 seems pretty neutral and distant.
I think that you will find a lot of strong opinions, and not a lot of hard data. Certainly any approach can work out poorly. For example antidepressants come with warnings about suicide risk. The reason is that they can enable people to take action on their suicidal feelings, before their suicidal feelings are fixed by the treatment.
I know that many teens turn to social media. My strong opinions against that show up in other comments...
Case studies support this. Which is a fancy way to say, "We carefully documented anecdotal reports and saw what looks like a pattern."
There is also a strong parallel to manic depression. Manic depressives have a high suicide risk, and it usually happens when they are coming out of depression. With akathisia (fancy way to say inner restlessness) being the leading indicator. The same pattern is seen with antidepressants. The patient gets treatment, develops akathisia, then attempts suicide.
But, as with many things to do with mental health, we don't really know what is going on inside of people. While also knowing that their self-reports are, shall we say, creatively misleading. So it is easy to have beliefs about what is going on. And rather harder to verify them.
The article links to the case of Adam Raine, a depressed teenager who confided in ChatGPT for months and committed suicide. The parents blame ChatGPT. Some of the quotes definitely sound like encouraging suicide to me. It’s tough to evaluate the counterfactual though. Article with more detail: https://www.npr.org/sections/shots-health-news/2025/09/19/nx...
You know, usually it’s positive claims which are supposed to be substantiated, such as the claim that “LLMs can be good at therapy”. Holy shit, this thread is insane.
You don't seem to understand how burden of proof works.
My claim that LLMs can do effective therapeutic things is a positive claim. My report of my wife's experience is evidence. My example of something it has done for her is something that other people, who have experienced LLMs, can sanity check and decide whether they think this is possible.
You responded by saying that it is categorically impossible for this to be true. Statements of impossibility are *ALSO* positive claims. You have provided no evidence for your claim. You have failed to meet the burden of proof for your position. (You have also failed to clarify exactly what you consider impossible - I suspect that you are responding to something other than what I actually said.)
This is doubly true given the documented effectiveness of tools like https://www.rosebud.app/. Does it have very significant limitations? Yes. But does it deliver an experience that helps a lot of people's mental health? Also, yes. In fact that app is recommended by many therapists as a complement to therapy.
But is it a replacement for therapy? Absolutely not! As they themselves point out in https://www.rosebud.app/care, LLMs consistently miss important things that a human therapist should be expected to catch. With the right prompts, LLMs are good at helping people learn and internalize positive mental health skills. But that kind of use case only covers some of the things that therapists do for you.
So LLMs can and do accomplish effective therapeutic things when prompted correctly. But they are not a replacement for therapy. And, of course, an unprompted LLM is unlikely to do the potentially helpful things that it could on its own.
No, it is evidence. It is evidence that can be questioned and debated, but it is still evidence.
Second, you misrepresent. The therapists that I have heard recommend Rosebud were not paid to do so. They were doing so because they had seen it be helpful.
Furthermore you have still not clarified what it is you think is impossible, or provided evidence that it is impossible. Claims of impossibility are positive assertions, and require evidence.
I don't think "doing something about it" equals "being a solution". To tackle the problems of the homeless, people operate a lot of food banks. Those don't even begin to solve homelessness, yet they're a precious resource, so, "doing something".
> Unless, of course, you count the AI algorithms that TikTok uses to drive engagement, which in turn can cause social contagion...
I have noticed that TikTok can detect a depressive episode within ~a day of it starting (for me), as it always starts sending me way more self harm related content
Are you quite certain the depressive episode developed organically and Tiktok reacted to it? Maybe the algorithm started subtly on that path two days before you noticed the episode and you only realize once it starts showing self-harm content?
Hmm, that's quite possible (and concerning to think about)
It had been showing me depressive content for days / weeks beforehand, during the start of the episode; however, the self-harm content only started (or I only noticed it) a few hours after I had a relapse, so the timing was rather uncanny.
ChatGPT/Claude can be absolutely brilliant at supportive, everyday therapy, in my experience. BUT there are a few caveats: I've been in therapy for a long time already (500+ hours), I don't trust it with important judgements or advice that goes counter to what I or my therapists think, and I also give Claude access to my diary with MCP, which makes it much better at figuring out the context of what I'm talking about.
Also, please keep in mind "supportive, everyday". It's talking through stuff that I already know about, not seeking new insights and revelations. Just shooting the shit with an entity which is booted with well-defined ideas from you and your real human therapist, and which can give you very predictable, common-sense reactions that can still help when it's 2am and you have nobody to talk to, and all of your friends have already heard this exact talk about these exact problems 10 times already.
I don’t use it for therapy, but my notes and journal are all just Logseq markdown. I’ve got a claude code instance running on my NAS with full two way access to my notes. It can read everything and can add new entries and tasks for me.
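For reference, wiring notes into a model this way can be done with a small MCP server. The sketch below uses the FastMCP helper from the official `mcp` Python SDK; the journal directory, tool names, and Logseq-style file naming are my assumptions for illustration, not the commenters' actual setup.

```python
# A minimal sketch of exposing a Logseq-style journal to an MCP client
# (e.g. Claude Desktop / Claude Code) for read and append access.
# Paths and tool names are hypothetical.
from datetime import date
from pathlib import Path

from mcp.server.fastmcp import FastMCP

JOURNAL_DIR = Path.home() / "notes" / "journals"  # assumed Logseq journal dir
mcp = FastMCP("diary")

@mcp.tool()
def read_journal(day: str) -> str:
    """Return the markdown journal entry for a given day (YYYY_MM_DD)."""
    path = JOURNAL_DIR / f"{day}.md"
    return path.read_text() if path.exists() else ""

@mcp.tool()
def append_entry(text: str) -> str:
    """Append a bullet to today's journal entry and return its path."""
    JOURNAL_DIR.mkdir(parents=True, exist_ok=True)
    path = JOURNAL_DIR / f"{date.today().strftime('%Y_%m_%d')}.md"
    with path.open("a") as f:
        f.write(f"- {text}\n")
    return str(path)

if __name__ == "__main__":
    mcp.run()  # the MCP client connects over stdio by default
```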
~11% of the US population is on antidepressants. I'm not, but I personally know the biggest detriment to my mental health is just how infrequently I'm in social situations. I see my friends perhaps once every few months. We almost all have kids. I'm perfectly willing and able to set aside more time than that to hang out, but my kids are both very young still and we aren't drowning in sports/activities yet (hopefully never...). For the rest it's like pulling teeth to get them to do anything, especially anything sent via group message. It's incredibly rare we even play a game online.
Anyways, I doubt I'm alone. I certainly know my wife laments the fact she rarely gets to hang out with her friends too, but she at least has one that she walks with once a week.
Small kids do this to everybody. The only solution - if you have good family nearby, use them as parenting services from time to time, to get me-time, couple-time and social time with friends. Buy them a gift or a vacation in return. It's incredibly damaging to a marriage, which literally transforms overnight from a rosy, great, easy-to-manage relationship into almost daily hardship, stress and nerves. The alternative is a (good) nanny.
People have issues admitting it even when it's visible to everybody around, like it's some sort of admission that you are failing as a parent, partner, human being and whatnot. Nope, we are just humans with limited energy, and even good kids can siphon it well beyond 100% continuously, that's all.
Now I am not saying be a bad parent; on the contrary, to reach your maximum even as a parent and partner, you need to be in good shape mentally, not running on fumes continuously.
Life without kids is really akin to playing the game of life on the easiest settings. Much less rewarding at the end, but man, that freedom and simplicity... you appreciate it way more once you lose it. The way kids can easily make any parent very angry is simply not experienced elsewhere in adult life... I saw this many times in otherwise very chill people and also in myself & my wife. You just can't ever get close to such fury and frustration dealing with other adults.
I'm surprised it's that low to be honest. By their definition of any mental illness, it can be anything from severe schizophrenia to mild autism. The subset that would consider suicide is a small slice of that.
Would be more meaningful to look at the % of people with suicidal ideation.
> By their definition of any mental illness, it can be anything from severe schizophrenia to mild autism.
Depression, schizophrenia, and mild autism (which by their accounting probably also includes ADHD) should NOT be thrown together into the same bucket. These are wholly different things, with entirely different experiences, treatments, and management techniques.
At that level it in part depends on your point of view: There's a general requirement in the DSM for a disorder to be something that is causing distress to the patient or those around them, or an inability to function normally in society. So someone with the same symptoms could fall under those criteria or not depending on their outlook and life situation.
> Mild/high-functional autism, as far as I understand it, is not even an illness but a variant of normalcy. Just different.
As someone who actually has an ASD diagnosis, and also has kids with that diagnosis too, this kind of talk irritates me…
If someone has a clinical diagnosis of ASD, they have a psychiatric diagnosis per the DSM/ICD. If you meet the criteria of the “Diagnostic and Statistical Manual of Mental Disorders”, surely by that definition you have a “mental disorder”… if you meet the criteria of the “International Classification of Diseases”, surely by that definition you have a “disease”
Is that an “illness”? Well, I live in the state of NSW, Australia, and our jurisdiction has a legal definition of “mental illness” (Mental Health Act 2007 section 4):
"mental illness" means a condition that seriously impairs, either temporarily or permanently, the mental functioning of a person and is characterised by the presence in the person of any one or more of the following symptoms--
(a) delusions,
(b) hallucinations,
(c) serious disorder of thought form,
(d) a severe disturbance of mood,
(e) sustained or repeated irrational behaviour indicating the presence of any one or more of the symptoms referred to in paragraphs (a)-(d).
So by that definition most people with a mild or moderate “mental illness” don’t actually have a “mental illness” at all. But I guess this is my point-this isn’t a question of facts, just of how you choose to define words.
Your comment wasn’t wrong. Neither is the reply wrong to be frustrated about how the world understands this complex topic.
You’re talking about autism. The reply is about autism spectrum DISORDER.
Different things, exacerbated by the imprecise and evolving language we use to describe current understanding.
An individual can absolutely exhibit autistic traits, whilst also not meeting the diagnostic criteria for the disorder.
And autistic traits are absolutely a variant of normalcy. When you combine many together, and it affects you in a strongly negative way, now you meet ASD criteria.
It sounds like you’re feeling down. Why don’t you pop a couple Xanax(tm) and shop on Amazon for a while, that always makes you feel better. Would you like me to add some Xanax(tm) to your shopping cart to help you get started?
Set an alarm on your phone for when you should take your meds. Snooze if you must, but don't turn off /accept the alarm until you take them.
Put daily meds in a cheap plastic pillbox labelled Sunday-Saturday (which you refill weekly). The box will help you notice if you skipped a day or can't remember whether you took them today. Seeing pills not taken from past days also serves to alert you that your "remember-to-take-them" system is broken and you need to make conscious adjustments to it.
Sure, but your therapist is also monetizing your pain for his own gain. Either A.I therapy works (e.g. can provide good mental relief) or it doesn't. I tend to think it's gonna be amazing at those things, talking from experience (very rough week with my mom's health deteriorating fast; did a couple of sessions with Gemini that felt like talking to a therapist). Perhaps it won't work well for hard issues like real mental disorders, but guess what, human therapists are very often also not great at treating people with serious issues.
But one is a company run by sociopaths that have no empathy and couldn't care less about anything but money, while the other is a human who at least studied the field all their life.
> But one is a company run by sociopaths that have no empathy and couldn't care less about anything but money, while the other is a human who at least studied the field all their life.
Unpacking your argument you make two points:
1) The human has studied all his life; yes, some humans study and work hard. I have also studied programming half my life and it doesn't mean A.I can't make serious contributions in programming and that A.I won't keep improving.
2) These companies, or OpenAI in particular, are untrustworthy, money-grabbing assholes. To this I say: if they truly care about money they will try to do a good job, e.g. provide an A.I that is reliable, empathetic and that actually helps you get on with life. If they won't - a competitor will. That's basically the idea of capitalism and it usually works.
This stat is for AMI, for any mental disorder ranging from mild to severe. Anyone self-reporting a bout of anxiety or mild depression qualifies as a data point for mental illness. For suicide ideation the SMI stat is more representative.
There are 800 million weekly active users on ChatGPT. 1/800 users mentioning suicide is a surprisingly low number, if anything.
But they may well be overreporting suicidal ideation...
I was asking a silly question about the toxicity of eating a pellet of Uranium, and ChatGPT responded with "... you don't have to go through this alone. You can find supportive resources here[link]"
My question had nothing to do with suicide, but ChatGPT assumed it did!
We don't know how that search was done. For example, "I don't feel my life is worth living." Is that potential suicidal intent?
Also these numbers are small enough that they can easily be driven by small groups interacting with ChatGPT in unexpected ways. For example if the song "Everything I Wanted" by Billie Eilish (2019) went viral in some group, the lyrics could easily show up in a search for suicidal ideation.
That said, I don't find the figure at all surprising. As has been pointed out, an estimated 5.3% of Americans report having struggled with suicidal ideation in the last 12 months. People who struggle with suicidal ideation, don't just go there once - it tends to be a recurring mental loop that hits over and over again for extended periods. So I would expect the percentage who struggled in a given week to be a large multiple of the simplistic 5.3% divided by 52 weeks.
In that light this statistic has to be a severe underestimate of actual prevalence. It says more about how much people open up to ChatGPT, than it does to how many are suicidal.
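A quick back-of-the-envelope check of that argument, using only the numbers already mentioned in this thread (the assumed 800M weekly active users, the reported ~1M flagged conversations per week, and the ~5.3% annual ideation figure):

```python
# Back-of-the-envelope numbers from the thread: ~1M users/week raising
# suicidal intent against ~800M weekly active users, versus a ~5.3% annual
# prevalence of suicidal ideation. The naive /52 conversion is the
# "simplistic" one the comment above pushes back on, since ideation tends
# to recur week after week for the same people.
weekly_flagged = 1_000_000
weekly_active_users = 800_000_000
annual_ideation_rate = 0.053  # ~5.3% of US adults in a year, per the thread

observed_weekly_rate = weekly_flagged / weekly_active_users
naive_weekly_rate = annual_ideation_rate / 52

print(f"observed: {observed_weekly_rate:.4%}")       # ~0.125% of users per week
print(f"naive annual/52: {naive_weekly_rate:.4%}")   # ~0.102% per week
# Because the same people tend to struggle week after week, the true weekly
# prevalence is likely a large multiple of the naive figure, which is why
# the comment above reads the 0.125% as an underestimate of how many users
# are actually struggling.
```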
(Disclaimer. My views are influenced by personal experience. In the last week, my daughter has struggled with suicidal ideation. And has scars on her arm to show how she went to self-harm to try to hold the thoughts at bay. I try to remain neutral and grounded, but this is a topic that I have strong feelings about.)
>Most people don't understand just how mentally unwell the US population is
The US is no exception here though. One in five people having some form of mental illness (defined in the broadest possible sense in that paper) is no more shocking than observing that one in five people have a physical illness.
With more data becoming available through interfaces like this it's just going to become more obvious and the taboos are going to go away. The mind's no more magical or less prone to disease than the body.
I am one of these people (mentally ill - bipolar 1). I've seen others, via hospitalization, whom I would simply refuse to let use ChatGPT because it is so sycophantic and would happily encourage delusions and paranoid thinking given the right prompts.
> At least OpenAI is trying to do something about it.
In this instance it’s a bit like saying “at least Tesla is working on the issue” after deploying a dangerous self driving vehicle to thousands.
edit: Hopefully I don't come across as overly anti-llm here. I use them on a daily basis and I truly hope there's a way to make them safe for mentally ill people. But history says otherwise (facebook/insta/tiktok/etc.)
Yep, it's just a question of whether on average the "new thing" is more good than bad. Pretty much every "new thing" has some kind of bad side effect for some people, while being good for other people.
I would argue that both Tesla self driving (on the highway only), and ChatGPT (for professional use by healthy people) has been more good than bad.
I thought it would be limited when the first truly awful thing inspired by an LLM happened, but we’ve already seen quite a bit of that… I am not sure what it will take.
It seems like people here have already made up their mind about how bad llms are. So just my anecdote here, it helped me out of some really dark places. Talking to humans (non psychologists) had the opposite effect. Between a non professional and an llm, i'd pick llm for myself. Others should definitely seek help.
It's a matter of trust and incentives. How can you trust a program curated by an entity with no accountability? A therapist has a personal stake in helping patients. An LLM provider does not.
Seeking help should not be so taboo as people are resorting to doing it alone at night while no one is looking. That is society loudly saying "if you slip off the golden path even a little your life is over". So many people resorting to LLMs for therapy is a symptom of a cultural problem, it's not a solution to a root issue.
Over the last five years I've been in and out of therapy and 2/3 of my therapists have "graduated me" at some point in time, stating that their practice didn't see permanent therapy as a good solution. I don't think all therapists view it this way.
I'll start with a direct response, because otherwise I suspect my answer may come across as too ... complex.
> How can I trust a therapist that has a financial incentive to keep me seeing them?
The direct response: I hope the commenter isn't fixated on this framing of the question, because I don't think it is a useful framing. [1] What is a better framing, then? I'm not going to give a simple answer. My answer is more like a process.
I suggest refining one's notion of trust to be "I trust Person A to do {X, Y, Z} because of what I know about them (their incentives, professional training, culture, etc)."
Shift one's focus and instead ask: "What aspects of my therapist are positives and/or lead me to trust their advice? What aspects are negative and/or lead me to not trust their advice?" Put this in writing and put some time into it.
One might also want to journal on "How will I know if therapy is helping? What are my goals?" By focusing on this, I think answers relating to "How much is my therapist helping?" will become easier to figure out.
[1] I think it is not useful both because it is loaded and because it is overly specific. Instead, focus on figuring out what actions one should take. From here, the various factors can slot in naturally.
Perhaps then the solution is that LLMs need to be aware when the chat crosses a threshold and becomes talk of suicide.
When I was getting my Education degree, we were told that, as teachers, to take talk of suicide by students extremely seriously. If a student talks about suicide, a professional supposedly asks, "Do you know how you're going to do it?" If there is an affirmative response, the danger is real.
LLMs are quite good at psychological questions. I've compared AI responses with therapy professionals' responses and they matched 80%. It is easier to open up to it and be frank (so the fear of rejection or ridicule is gone). And most importantly, some people don't have access to a proper pool of therapists (you still need to "match" with one who resonates with you), making LLMs a bliss. There is a place for both human and LLM psychological help.
I've heard this a lot, and personally I've had a lot of good success with a prompt that explains some of my personality traits and asking it to work through a stressful situation for me. The good thing with this rather than a therapist/coach is that it understands a lot of the subject matter and can help with the detail.
I wonder if really what we need is some sort of supervised mode, where users chat with it but a trained professional reviews the transcripts and does a weekly/monthly/urgent checkin with them. This is how (some? most?) therapists work themselves, they take their notes to another therapist and go through them.
Keep in mind the purpose of all this “research” and “improvement” is just so OpenAI can have their cake (advertise their product as psychological supporter) and eat it too (avoid implementing any safeguards that would be required in any product for psychological support, but harmful for data collection). They just want to tell you that so many people write bad things it is inevitable :( what can we do :( proper handling would hurt our business model too much :(((
Surprised it's so low. There are 800 million users and the typical developed country has around 5±3% of the population[1] reporting at least one notable instance of suicidal feelings per year.
[1] Anybody concerned by such figures (as one justifiably should be without further context) should note that suicidality in the population is typically the result of their best approximation of the rational mind attempting to figure out an escape from a consistently negative situation under conditions of very limited information about alternatives, as is famously expressed in the David Foster Wallace quote on the topic.
The phenomenon usually vanishes after gaining new, previously inaccessible information about potential opportunities and strategies.
> best approximation of the rational mind attempting to figure out an escape from a consistently negative situation under conditions of very limited information about alternatives
I dislike this phrasing, because it implies things can always get better if only the suicidal person were a bit less ignorant. The reality is there are countless situations from which the entire rest of your life is 99.9999% guaranteed to constitute of a highly lopsided ratio of suffering to joy. An obvious example are diseases/disabilities in which pain is severe, constant, and quality of life is permanently diminished. Short of hoping for a miracle cure to be discovered, there is no alternative and it is perfectly rational to conclude that there is no purpose to continuing to live in that circumstance, provided the person in question lives with their own happiness as a motivating factor.
Less extreme conditions than disability can also lead to this, where it's possible things can get better but there's still a high degree of uncertainty around it. For example, if there's a 30% chance that after suffering miserably for 10 years your life will get better, and a 70% chance you will continue to suffer, is it irrational to commit suicide? I wouldn't say so.
And so, when we start talking about suicide on the scale of millions of people ideating, I think there's a bit of folly in assuming that these people can be "fixed" by talking to them better. What would actually make people less suicidal is not being talked out of it, but an improvement to their quality of life, or at least hope for a future improvement in quality of life. That hope is hard to come by for many. In my estimation there are numerous societies in which living conditions are rapidly deteriorating, and at some point there will have to be a reckoning with the fact that rational minds conclude suicide is the way out when the alternatives are worse.
Thank you for this comment, it highlights something that I've felt that needed to be said but is often suppressed because people don't like the ultimate conclusion that occurs if you try to reason about it.
A person considering suicide is often just in a terrible situation that can't be improved. While disease etc. are factors that are outside of humanity's control, other situations like being saddled with debt, or unjust accusations that people feel they cannot clear themselves of (e.g. Aaron Swartz), are systemic issues that one person cannot fight alone. You would see that people are very willing to say that "help is available" or some such when said person speaks about contemplating suicide, but very few people would be willing to solve someone's debt issues or provide legal help, as the case may be, when that is the factor behind one's suicidal thoughts. At best, all you might get is a pep talk about being hopeful and how better days might come along magically.
In such cases, from the perspective of the individual, it is not entirely unreasonable to want to end it. However, once it comes to that, walking back the reasoning chain leads to the fact that people and society have failed them, and therefore it is just better to apply a label to that person that they were "mentally ill" or "arrogant" and could not see a better way.
A few days ago I heard about a man who attempted suicide. It's not even an extreme case of disease or anything like that. It's just that he is over 70 (around 72, I think), with his wife in the process of divorcing him, and no children.
Even though I am lucky to be a happy person who enjoys life, I find it difficult to argue that he shouldn't commit suicide. At that age he's going to see his health declining; it's not going to get better in that respect. He is losing his wife, who was probably what gave his life meaning. It's too late for most people to meet someone new. Is life really going to give him more joy than suffering? Very unlikely. I suppose he should still hang on if he loves his wife, because his suicide would be a trauma for her, but if the divorce is bitter and he doesn't care... honestly I don't know if I could sincerely argue for him not to do it.
The question is not whether joy can be experienced, but whether the ratio of joy to suffering is enough to justify a desire to continue to put up with the suffering. Suppose a divorced 70-year-old is nearly blind and his heart is failing. He has no retirement fund. To survive, he does physical labour that his body can't keep up with for a couple of hours per day, and then sleeps for the rest of the day, worn down and exhausted. Given how little he is capable of working per day, he must work 7 days per week to make ends meet. He has no support network. He does not have the energy to spend on hobbies like reading, let alone physical activity like walking, and forget about travel.
I am describing someone I knew myself. He did not commit suicide, but he was certainly waiting for death to come to him. I don't think anything about his situation was rare. Undoubtedly, he was one of many millions who have experienced something similar.
The question they posited was "Is life really going to give him more joy than suffering?", not "Will he be able to find any joy at all?" They noted how things like declining health can plague the elderly, so I thought I'd relate a real-world case illustrating exactly how failing health and other difficulties can manifest in a way that the joy does not outweigh the suffering. The case in the parent comment didn't provide as much detail, but that doesn't necessarily mean you can default to an assumption that the man could in fact find more joy than suffering.
> The case in the parent comment didn't provide as much detail, but that doesn't necessarily mean you can default to an assumption that the man could in fact find more joy than suffering.
I should just assume things that aren't there, rather than expect a commenter to provide a substantive argument? OK.
This is the part people don't like to talk about. We just brand people as "mentally ill" and suddenly we no longer need to consider if they're acting rationally or not.
Life can be immensely difficult. I'm very skeptical that giving people AI would meaningfully change existing dynamics.
> [1] Anybody concerned by such figures (as one justifiably should be without further context) should note that suicidality in the population is typically the result of their best approximation of the rational mind attempting to figure out an escape from a consistently negative situation under conditions of very limited information about alternatives, as is famously expressed in the David Foster Wallace quote on the topic.
> The phenomenon usually vanishes after gaining new, previously inaccessible information about potential opportunities and strategies.
Is this actually true? (i.e. backed up by research)
[I'm not necessarily doubting it; that is just different from my mental model of how suicidal thoughts work, so I'm just curious.]
There is another factor to consider. The stakes of asking an AI about a taboo topic are generally considered to be very low. The number of people who have asked ChatGPT something like "how to make a nuclear bomb" should not be an indication of the number of people seriously considering doing that.
That’s an extreme example where it’s clear to the vast majority of people asking the question that they probably do not have the means to make one. I think it’s more likely that real world actions come out of the question ‘how do I approach my neighbour about their barking dogs’ at a far higher rate. Suicide is somewhere between the two, but probably closer to the latter than the former.
That's 1 million people per week, not in general. It could be 1 million different people every week. (Probably not, but you get where I'm going with that.)
To be fair, this is a weekly figure, and it is focused specifically on planning or intent. Over a year, you may get more unique hits on those attributes... which I feel are both stronger indicators than suicidal feelings alone on the scale of "how quickly feelings will turn to actions". Talking in the same language and timescales is important in drawing these comparisons - it very well could be that OAI's numbers are higher than what you are comparing against when normalized for the differences I've highlighted or others I've missed.
Why assume any of the information in this article is factual? Is there any indication any of it was verified by anyone who does not have a financial interest in "proving" a foregone conclusion? The principal author of this does not even have the courage to attach their name to it.
Yikes, you can't attack another user like this on HN, regardless of how wrong they are or you feel they are. We ban accounts that post like this, so please don't.
Fortunately, a quick skim through your recent comments didn't turn up anything else like this, so it should be easy to fix. But if you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site to heart, we'd be grateful.
It becomes a problem when people cannot distinguish real from fake. As long as people realize they are talking to a piece of software and not a real person, "suicidal people shouldn't be allowed to use LLMs" is almost on par with "suicidal people shouldn't be allowed to read books", or "operate a dvd player", or "listen to alt-rock from the 90s". The real problem is of course grossly deficient mental health care and lack of social support that let it get this far.
(Also, if we put LLMs on par with media consumption one could take the view that "talking to an LLM about suicide" is not that much different from "reading a book/watching a movie about suicide", which is not considered as concerning in the general culture.)
I don’t buy the “LLMs = books” analogy. Books are static; today’s LLMs are adaptive persuasion engines trained to keep you engaged and to mirror your feelings. That’s functionally closer to a specialized book written for you, in your voice, to move you toward a particular outcome. If there exists a book intended to persuade its readers into committing suicide, it would surely be seen as dangerous for depressed people.
There has certainly been more than one book, song, or film romanticising suicide to the point where some people interpreted it as "intended to persuade its readers into committing suicide".
I work with a company that is building tools for mental health professionals. We have pilot projects in diverse nations, including in nations that are considered to have adequate mental health care. We actually do not have a pilot in the US.
The phenomenon of people turning to AI for mental health issues in general, and suicide in particular, is not confined to only those nations or places lacking adequate mental health access or awareness.
> As long as people realize they are talking to a piece of software and not a real person
That has nothing to do with the issue. Most people do realise LLMs aren’t people, the problem is that they trust them as if they were better than another human being.
We know people aren’t using LLMs carefully. Your hypothetical is irrelevant because we already know it isn’t true.
Precisely. I too have a bone to pick with AI companies, Big Tech and co., but there are deeper societal problems at work here, where blanket bans and the like are useless, or a slippery slope towards policies that can be abused someday, somehow.
And solutions for solving those underlying problems? I haven't the faintest clue. Though these days I think the lack of third spaces in a lot of places might have a role to play in it.
In pursuit of that extra 0.1% of growth and extra 0.15 EPS, we've optimised and reoptimised until there isn't really space for being human. We're losing the ability to interact with each other socially, to flirt; now we're making life so stressful people literally want to kill themselves. All in a world (bubble) of abundance, where so much food is made that we literally don't know what to do with it. Or we turn it into ethanol to drive more unnecessarily large cars, paid for by credit card loans we can scarcely afford.
My plan B is to become a shepherd somewhere in the mountains. It will be damn hard work for sure, and stressful in its own way, but I think I'll take that over being a corpo-rat racing for one of the last post-LLM jobs left.
You don't need to withdraw from humanity, you only need to withdraw from Big Tech platforms. I'm continually amazed at the difference between the actual human race and the version of the human race that's presented to me online.
The first one is basically great, everywhere I go, when I interact with them they're some mix of pleasant, friendly, hapless, busy, helpful, annoyed, basically just the whole range of things that a person might be, with almost none of them being really awful.
Then I get online and look at Reddit or X or something like that and they're dominated by negativity, anger, bigotry, indignation, victimization, depression, anxiety, really anything awful that's hard to look away from, has been bubbled up to the top and oh yes next to it there are some cat videos.
I don't believe we are seeing some shadow side of all society that people can only show online, the secret darkness of humanity made manifest or something like that. Because I can go read random blogs or hop into some eclectic community like SDF and people in those places are basically pleasant and decent too.
I think it's just a handful of companies who used really toxic algorithms to get fantastically rich and then do a bunch of exclusivity deals and acquire all their competition, and spread ever more filth.
You can just walk away from the "communities" these crime barons have set up. Delete your accounts and don't return to their sites. Everything will immediately start improving in your life and most of the people you deal with outside of them (obviously not all!) turn out to be pretty decent.
The principal survival skill in this strange modern world is meeting new people regularly, being social, enjoying the rich life and multitude of benefits which arise from that, but also disconnecting with extreme rapidity and prejudice if you meet someone who's showing signs of toxic social media brain rot. Fortunately many of those people rarely go outside.
Reddit is a really good example of this because it used to be a feed of what you selected yourself. But they couldn’t juice the metrics that way, so they started pushing algorithmic suggestions. And boy, do those get me riled up. It works like a charm, because I spend more time on these threads, defending what seems like common sense.
But at the end I don’t feel a sense of joy like I used to with the old Reddit. Now it feels like a disgusting cesspool that keeps drawing me back with its toxicity.
Edit: this is a skill issue. It’s possible to disable algorithmic suggestions in settings. I’ve done that just now.
I'm a driver and a cyclist. I used to frequent both r/londoncycling and r/CarTalkUK. I liked each sub for its discussion of each topic. Best route from Dalston to Paddington, best family car for motorway mileage, that kind of thing.
Now, because of the algo-juicing home page, both subs are full of each other's people arguing at each other. Cyclists hating drivers, drivers hating cyclists. It's just so awful.
The general level of hatred and anger that they’re stoking is insane. There used to be a reddit taboo against linking to other subReddits to avoid “brigading”. No issues with that now, because the Reddit app will add that thread to your feed. “/r/londoncycling users also enjoy /r/CarTalkUK!” For some weird definition of enjoy I guess.
In my experience, >95% of the people you see online (comments, selfies, posts) seem way worse - more evil, arrogant, or enraging - than even the worst <1% of people I’ve met in real life. And that definitely doesn’t help those of us who are already socially anxious.
Obviously, “are way worse” means I interpret them that way. I regularly notice how I project the worst possible intentions onto random Reddit comments, even when they might be neutral or just uninformed. Sometimes it feels like my brain is wired to get angry at people. It’s a bit like how many people feel when driving: everyone else is evil, incompetent, or out to ruin your day. When in reality, they’re probably in the same situation as you - maybe they had a bad morning, overslept, or are rushing to work because their boss is upset (and maybe he had a bad morning too). They might even have a legitimate reason for driving recklessly, like dealing with an emergency. You never know.
For me, it all comes back to two things:
(1) Leave obnoxious, ad-driven platforms that ~need~ want (I mean, Mark Zuckerberg has to pay for cat food somehow) to make you mad, because that’s the easiest way to keep you engaged.
(2) Try to always see the human behind the usernames, photos, comments, and walking bodies on the street. They’re a person just like you, with their own problems, stresses, and unmet desires. They’re probably trying their best - just like you.
All of this only goes to show how far we've come on our journey to profit optimization. We could optimize away those pesky humans completely if it weren't for the annoying fact that they are the source of all those profits.
Oh, but humans are actually not the source of all profit! This is where phenomena like click fraud become interesting.
Some estimates for 2025: around 20-30% of all ad clicks were bots. Around $200B in ad spend annually lost to click fraud.
So this is where it gets really interesting, right: the platforms are filled with bots; maybe a quarter of the monetizable action occurring on them IS NOT HUMAN, but lots of it gets paid for anyway.
It's turtles all the way down. One little hunk of software, serving up bits to another little hunk of software, constitutes perhaps a quarter of what they call "social" media.
We humans aren't the minority player in all this yet, the bots are still only 25%, but how much do you want to bet that those proportions will flip in our lifetimes?
The future of that whole big swathe of the Internet is probably that it will be 75% some weird shell game between algorithms, and 25% people who have completely lost their minds by participating in it and believing it's real.
I have no idea what this all means for the fate of economics and society but I do know that in my day to day life I'm a lot happier if I just steer clear of these weird little paperclip maximizing robots. To reference the original article, getting too involved with them literally makes you go crazy and think more often about suicide.
> Some estimates for 2025: around 20-30% of all ad clicks were bots. Around $200B in ad spend annually lost to click fraud.
I think this is the wrong way to look at it.
Bots lower the cost per click so they should have net zero impact on overall ad spend.
Imagine if the same number of humans were clicking on ads but the numbers of bots increased tenfold. Would total ad spend increase accordingly? No, it would remain the same because budgets don't magically increase. The average value of a click would just go down.
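A toy back-of-the-envelope of that fixed-budget argument (all numbers below are invented purely for illustration) makes the mechanism visible: total spend stays flat while the average cost per click is diluted by bot traffic.

```python
# Toy model of the fixed-budget argument above; every number is made up for illustration.
budget = 1_000_000        # total ad spend in dollars, assumed fixed by the advertiser
human_clicks = 500_000    # real clicks, assumed constant

for bot_multiplier in (0, 1, 10):            # no bots, as many bots as humans, 10x bots
    bot_clicks = human_clicks * bot_multiplier
    total_clicks = human_clicks + bot_clicks
    avg_cpc = budget / total_clicks          # average price paid per (any) click
    per_human = budget / human_clicks        # effective cost per real click
    print(f"bots x{bot_multiplier:>2}: spend ${budget:,}, "
          f"avg CPC ${avg_cpc:.2f}, cost per human click ${per_human:.2f}")
```

Under that assumption, the spend column never moves; only the average CPC falls, which is the commenter's point that a "$200B lost to fraud" figure only materializes if budgets actually scale with click volume.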
The romantic fallback plan of being a farmer or shepherd. I wonder, do farmers and shepherds also romanticize becoming programmers or accountants when they feel down?
They do. I taught cross-career programming courses in the past, where most of my students had day jobs, some involving hard physical work. They'd gladly swap all that for the opportunity to feed their families by writing code.
Just goes to show how the grass is always greener when you look at the other side.
That said, I also plan to retire up in the mountains soon, rather than keep feeding the machine.
I'm close with a number of people living a relatively hard working life producing food and I've not seen this at all personally, no. It can be very rough but for these people at least it is very fulfilling and the idea of going to be in an office would look like death. People joke about it a bit but no way.
That said there probably are folks who did do that and left to go be in an office, and I don't know them.
Actually I do know one sort of, but he was doing industrial farm work driving and fixing big tractors before the office, which is a different world altogether. Anyway I get the sense he's depressed.
You'd be surprised how technical farming can be. We software engineers often have a deep desire to make efficient systems that function well, in a mostly automated fashion, so that we can observe these systems in action and optimize them over time.
A farm is just such a system that you can spend a lifetime working on and optimizing. The life you are supporting is "automated", but the process of farming involves an incredible amount of system level thinking. I get tremendous amounts of satisfaction from the technical process of composting, and improving the soil, and optimizing plant layouts and lifecycles to make the perfect syntropic farming setup. That's not even getting into the scientific aspects of balancing soil mixtures and moisture, and acidity, and nutrient levels, and cross pollinating, and seed collecting to find stronger variants with improved yields, etc. Of course the physical labor sucks, but I need the exercise. It's better than sitting at a desk all day long.
Anyway, maybe the farmers and shepherds also want to become software engineers. I just know I'm already well on the way to becoming a farmer (with a homelab setup as an added nerdy SWE bonus).
The old term for it was to become a “gentleman farmer.” There’s a history to it - George Washington and Thomas Jefferson were the same for a part of their lives.
Some of those men could meet someone if they quit Tinder or whatever crap online platform they might be using for dating, and start meeting people in real life.
Worked for me at least. There's simply less competition and more space for genuine social interaction.
> Some of those men could meet someone if they quit Tinder
Maybe your intentions are good, but remember, unless we legalize polygamy, the "bad/inept/creepy straight-white men" narrative should crumble for people in their 30s and 40s, when it's the last train for marriage and children.
But we don't have a "some of those women..." narrative about single women in their 40s complaining they can't find a husband.
My point is that it's a universal problem in the civilized world, spanning vastly different cultures in Asia, Europe and North America; "some of those men" is a very hand-wavy explanation, and I think it stems from the extremely toxic (I'd say anti-human and demonic) Hollywood pop culture.
I'm not going to say it's 'simple' to have hobbies or find people, but realistically if you don't regularly meet strangers in real life, you'll never date strangers so it's a catch 22.
Unless we all want to set ourselves up for arranged marriages in the future, we need to confront this reality.
Speaking as a pariah for most of my life: I doubt it would ever be so dire.
There are always going to be social circles and people coupling up no matter what. But if anything, I wonder whether, for people like me who aren't really worthy of intimacy, a society that has options to live a solitary life while still contributing is actually a net positive overall. For me to self-select out of the dating pool would mean less noise for someone else looking for a worthy partner.
There's less chaff that people in said pool would have to wade through. The people who want to couple up and are capable of doing so will continue to do so with less distraction. That seems an overall good thing, no?
Real life hobbies, voluntary work, religious organizations if you're into that stuff. Any of these could work, as long as you find some genuine interest in it, and there are enough people that meet your dating profile around.
Of course there's also the possibility of meeting people in online communities centered around some shared interest. IMO that's also probably more effective than dating apps, especially if it leads to meeting in real life later on.
Go to parties.... One of the 5 biggest party days is this Friday, and with it being on a Friday it will be more intense. A solid 3 nights of good parties. That's all you have to do, I do not understand how this is lost on people. Go to parties and have fun and meet people.
> Go to parties.... One of the 5 biggest party days is this Friday, and with it being on a Friday it will be more intense.
You mean Halloween?
> Go to parties and have fun and meet people.
You mean standing with a glass of champagne in hand, smiling, and talking for the sake of talking? I don't understand how this is fun. I tried doing that, albeit without champagne, and that had not yielded anything other than an increased connections count on LinkedIn.
It's fun for many of us due to the combination of music, dancing, alcohol and socialization (in varying proportions: depending on tastes, interests and circumstances, one or two of those aspects can be set to zero and it's still enjoyable).
Of course, it's also perfectly fine not to like it, and then the most reasonable course of action is not to go. Or to go a couple of times until you're sure you don't like it, and not go anymore. I know cases of people who go partying just because they want to find a partner, but don't enjoy it at all (it's relatively common in my country because partying is quite a religion and there's often a lot of social pressure at certain ages), and that's rather sad. There are other ways to socialize, it's not necessary at all to torture oneself.
That said, I do have to push back on the questioning of "talking for the sake of talking". In the context of finding a partner, talking to other people is exactly what people need... it's not "for the sake of talking", it's for the sake of socializing, meeting new people, building connections, which is the whole point when we're talking about flirting or lack thereof.
> it's not "for the sake of talking", it's for the sake of socializing, meeting new people, building connections, which is the whole point when we're talking about flirting or lack thereof.
In my experience you really have to be constantly spitting nonsense to keep the conversation from ending and to avoid awkward silence. When the other person is talking, even if I didn't hear most of what they said, I keep nodding, because I don't actually care in the slightest about what they were talking about, and so asking to repeat does not make sense, as that would only increase awkwardness. This is why I said "for the sake of talking." The only thing that matters is that you are talking, not the content of the talk.
Depends on the country and person I guess. When I did try approaching women a few times, it was 10% angry looks, 30% awkward, 30% basic polite conversation to fulfill social obligation, and 30% friendly conversation. Unfortunately I'm not keen enough to pursue that 30% of friendly conversations by wading through the rest.
I know right? And tech is such a male-dominated industry, so presence of a female in your proximity is a rare event by itself. But, even if such an event occurs, as you said, interacting with a female is one hell of a minefield. Honestly, at this point, I cannot blame people for choosing to be gay. It is just so much easier to just talk to men, because you don't have to worry about all those mind games.
This trend and direction has been going a long time and it's becoming increasingly obvious. It is ridiculous and insane.
Go for your plan B.
I followed my similar plan B eight years ago, wild journey but well worth it. There are a lot of ways to live. I'm not saying everyone should get out of the rat race but if you're one, like I was, who has a feeling that the tech world is mostly not right in an insidious kind of way, pay attention to that feeling and see where it leads. Don't need to be brash as I was, but be true to yourself. There's a lot more to life out there.
If you have kids and they depend on an expensive lifestyle, definitely don't be brash. But even that situation can be re-evaluated and shifted for the better if you want to.
It's been a lot of things but the gist was to get out of the office and city and computer and be mostly outdoors in nature and learn all the practical skills and other things like music. Ironically I've ended up on the computer a fair amount doing conservation work to protect the places I've come to love. But still am off grid and in the woods every day and I love it.
>now we're making life so stressful people literally want to kill themselves
Is this actually the case? Working conditions and health during the industrial revolution don't seem to have been that much better. There is a perception that people now are more stressed/tired/miserable than before, but I am not sure that is the case.
In fact I think it's the opposite, we have enough leisure time to reflect upon the misery and just enough agency to see that this doesn't have to be a fact of life, but not enough agency to meaningfully change it. This would also match how birth rates keep declining as countries become more developed.
I'm right behind you on the escape to the mountains idea. I've actually already moved from the US to New Zealand, and the next step is a farm with some goats lol.
That said... I don't necessarily hate what AI is doing to us. If anything, AI is the ultimate expression of humanity.
Throughout history humans have continually searched for another intelligence. We study the apes and other animals, we pray to Gods, we look to the stars and listen to them to see if there are any radio signals from aliens, etc. We keep trying to find something else that understands what it is to be alive.
I would propose that maybe humans innately crave to be known by something other than ourselves. The search for that "other" is so fundamentally human, that building AI and interacting with it is just a natural progression of a quest we've already been on for thousands of years.
I partly agree and partly disagree. Yes, we're more individual and more isolated. But ChatGPT/Gemini can really provide mental relief for people - not everyone can afford, or has the time/energy, to find a good human therapist close to their home. And this thing lives in your computer or phone and you can talk to it to get mental relief 24/7. I don't see it as bleak as you do; mental help should be accessible and free for everyone.
I know we've had a bad decade with platforms like Meta/TikTok, but I'm not as convinced as you are that the current LLMs will have an adverse effect.
This is over the top. With a tiny reframe, I think the story is different. What is the average number of Google searches about suicide? What is the average number of weekly OpenAI users? (800M.) Is this an increasing trend or just a "shock value" number?
Things are not as bleak as they seem, and this number isn't even remotely surprising or concerning to me.
OpenAI gets a lot of hate these days, but on this subject it's quite possible that ChatGPT helped a lot of people choose a less drastic path. There could have been unfortunate incidents, but the number of people who were convinced not to take extreme steps could well be a few orders of magnitude larger (guessing).
I use it to help improve mental health, and with good prompting skills it's not bad. YMMV. OpenAI and others deserve credit here.
I agree with you in the sense that I find it helpful for personal topics. I found it very helpful to figure out how to deal with some difficult personal situation I was in. The thing actually helped me reassess the situation when I asked it to provide alternative viewpoints.
You can't just blindly type in your problem though; you still have to do the actual thinking yourself. Good prompting skill is the ability to steer with your mind. It's no different from using Google, where some people never figured out that you're actually typing in the solution you expect to find rather than the question you have. It's the same with these tools, it seems.
> Yeah this isn’t how any of this works and you’re deluding yourself.
I am not offended (at all). But you're dismissing my (continued) positive experience with "You're deluding yourself". How do you know? It'd be a lot more unfair to people who benefit more than I do, and I can totally imagine that being not a small set of people.
> Also incredible how you framed improving your mental health as a consequence of a (pseudo) technical skill set.
It's not incredible at all. If you're lost in a jungle with predators, a marksman might reach for their gun. A runner might just rely on running. I am just using skills I'm good at.
I think there are a good number of false positives. I asked ChatGPT something about Git commits, and it told me “I was going through a lot” and needed to get some support.
We need psychologists to work together with the federal government to develop legislation around what is and is not acceptable for chat-bots to recommend to people expressing suicidal thoughts...then we need to hold chat providers accountable for the actions their robots take.
For the foreseeable future, it should simply be against the law for a chatbot to provide psychological advice just like it's against the law for an unlicensed therapist to provide therapy...There are too many vulnerable people at risk for us to just run a continuous natural experiment.
I _love_ my chatbots for coding and we should encourage innovation but it's the job of government to protect people from systemic risks. We should expect OpenAI, Anthropic, and friends to operate in pro-social ways given their privileged position in society while the government requires them to stay "in line" with the needs of people they might otherwise ignore.
As others have mentioned, the headline stat is unsurprising (which is not to say this isn’t a big problem). Here’s another datapoint, the CDC’s stats claim that rates of thoughts, ideation, and attempts at suicide in the US are much higher than the 0.15% that OpenAI is reporting according to this article.
These stats claim 12.3M (out of 335M) people in the US in 2023 thought ‘seriously’ about suicide, presumably enough to tell someone else. That’s over 3.5% of the population, more than 20x higher than people telling ChatGPT. https://www.cdc.gov/suicide/facts/data.html
Keep in mind this is in the context of them being sued for not protecting a teen who chatted about his suicidal thoughts. It's to their benefit to have a really high count here because it makes it seem less likely they can address the problem.
I have long believed that if you are the editor of a blog, you incur obligations by right of publishing other people's statements. You may not like this, but it's what I believe. In some economies, the law even said it. You can incur legal obligations.
I now begin to believe if you put a ChatGPT online, and observe people are using it like this, you have incurred obligations. And, in due course, the law will clarify what they are. If (for instance) your GPT can construct a statistically valid position that the respondent is engaged in CSAM or acts of violence, where are the limits to liability for the hoster, the software owner, the software authors, the people who constructed the model...
Out of curiosity, are you the type of person who believes that someone like Joe Rogan has an obligation to argue with his guests if they stray from “expert consensus”, or for every guest that has a controversial opinion, feature someone with the opposite view to maintain balance?
Nope. This isn't my line of reasoning. But Joe should be liable for content he hosts, if the content defames people or is illegal. As should Facebook and even ycombinator. Or truth social.
Is the news-worthy surprise that so many people find life so horrible that they are contemplating ending it?
I really don't see that as surprising. The world and life aren't particularly pleasant things.
What would be more interesting is how effective ChatGPT is being in guiding them towards other ideas. Most suicide prevention notices are a joke - pretending that "call this hotline" means you've done your job and that's that.
No, what should instead happen is the AI try to guide them towards making their lives less shit - i.e. at least bring them towards a life of _manageable_ shitness, where they feel some hope and don't feel horrendous 24/7.
>what should instead happen is the AI try to guide them towards making their lives less shit
There aren't enough guardrails in place for LLMs to safely interact with suicidal people who are possibly an inch from taking their own life.
Severely suicidal/clinically depressed people are beyond looking to improve their lives. They are looking to die. Even worse, and what people who haven't been there can't fully understand is the severe inversion that happens after months of warped reality and extreme pain, where hope and happiness greatly amplify the suicidal thoughts and can make the situation far more dangerous. It's hard to explain, and is a unique emotional space. Almost a physical effect, like colors drain from the world and reality inverts in many dimensions.
It's really a job for a human professional and will be for a while yet.
Agree that "shut down and refer to hotline" doesn't seem effective. But it does reduce liability, which is likely the primary objective...
Refer-to-human directly seems like it would be far more effective, or at least make it easy to get into a chat with a professional (yes/no) prompt, with the chat continuing after a handoff. It would take a lot of resources though. As it stands, most of this happens in silence and very few do something like call a phone number.
Guess how I know you're wrong on the "beyond" bit.
The point is you don't get to intervene until they let you. And they've instead decided on the safer feeling conversation with the LLM - fuck what best practice says. So the LLM better get it right.
I could be mistaken, but my understanding was that the people most likely to interact with the suicidal or near-suicidal (i.e. 988 suicide hotline attendants) aren't actually mental health professionals; most of them are volunteers. The script they run through is fairly rote and by the numbers (the Question, Persuade, Refer framework). Ultimately, of course, a successful intervention will result in people seeing a professional for long-term support and recovery, but preventing a suicide and directing someone to that provider seems well within the capabilities of an LLM like ChatGPT or Claude.
> What would be more interesting is how effective ChatGPT is being in guiding them towards other ideas. Most suicide prevention notices are a joke - pretending that "call this hotline" means you've done your job and that's that.
I've triggered its safety behavior (for being frustrated, which it helpfully decided was the same as being suicidal), and it is the exact joke of a statement you said. It suddenly reads off a script that came from either Legal or HR.
Although weirdly, other people seem to get a much shorter, obviously not part of the chat message, while I got a chat message, so maybe my messages just made it regurgitate something similar. The shorter "safety" message is the same concept though, it's just: "It sounds like you’re carrying a lot right now, but you don’t have to go through this alone. You can find supportive resources here."
That implies there's some deep truth about reality in that statement rather than what it is, a completely arbitrary framing.
An equally arbitrary frame is "the world and life are wonderful".
The reason you may believe one instead of the other is not because one is more fundamentally true than the other, but because of a stochastic process that changed your mind state to one of those.
Once you accept that both states of mind are arbitrary and not a revealed truth, you can give yourself permission to try to change your thinking to the good framing.
And you can find the moral impetus to prevent suicide.
It’s not a completely arbitrary framing. It’s a consequence of other beliefs (ethical beliefs, beliefs about what you can or should tolerate, etc.), which are ultimately arbitrary, but it is not in and of itself arbitrary.
I don't mean to imply that it's easy to change or that whatever someone might be dealing with is not unbearable agony, just that it's not a first principle truth that has more value than other framings.
In the pits of depression that first framing can seem like the absolute truth and it's only when it subsides do people see it as a distortion of their thoughts.
I think this is certainly part of the problem. There's no shortage of narcissists in the English-speaking world who - if they heard the woes of someone in pain - would be ready to gleefully treat it as an opportunity to pontificate down to them about "stochastic processes" and so on, rather than consider how their lives are.
Of course, only thereby, through being quite as superior to all others and their thought processes as me [pauses to sniff fart] can one truly find the moral impetus to prevent suicide.
The randomness of the world and individual situations means no one can ever know for sure that their case is hopeless. It is unethical to force them to live, but it is also unethical not to encourage them to keep searching for the light.
AI should help people achieve their ultimate goals, not their proximate goals. We want it to provide advice on how to alleviate their suffering, not how to kill themselves painlessly. This holds true even for subjects less fraught than suicide.
I don't want a bot that blindly answers my questions; I want it to intuit my end goal and guide me towards it. For example, if I ask it how to write a bubblesort script to alphabetize my movie collection, I want it to suggest that maybe that's not the most efficient algorithm for my purposes, and ask me if I would like some advice on implementing quicksort instead.
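To make that example concrete (a hypothetical sketch, not anything the bot actually said): the literal request is a hand-rolled bubble sort, while the end goal is simply an alphabetized list, which a language's built-in sort already covers (Python's sorted() uses Timsort rather than the quicksort mentioned above, but the redirection is the same idea).

```python
# Hypothetical illustration of the movie-collection example above.

def bubble_sort(items):
    """The literal ask: an O(n^2) bubble sort."""
    items = list(items)
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

movies = ["Solaris", "Alien", "Metropolis", "Brazil"]

# The end goal: an alphabetized collection, which the built-in sort handles
# in O(n log n) with one line; that is the kind of redirection the comment asks for.
assert bubble_sort(movies) == sorted(movies)
print(sorted(movies))
```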
I agree. I also think this ties in with personalization in being able to understand long term goals of people. I think the current personalization efforts of models are more of a hack than what they should be.
Thanks to OpenAI for voluntarily sharing these important and valuable statistics. I think these ought to be mandatory government statistics, but until they are or it becomes an industry standard, I will not criticize the first company to helpfully share them, on the basis of what they shared. Incentives.
Rereading the thread and trying to generalise: LLMs are good at noisily suggesting solutions. That is, if you ask LLMs for some solutions to your problems, there's a high probability that one of the solutions will be good.
But it may be that the individual options are bad (maybe even catastrophic - glue on pizza anyone?), and that the right option isn't in the list. The user has to be able to make these calls.
It is like this with software - we have probably all been there. It can be like that with legal advice. And I guess it is like that with (mental) health.
What binds these is that if you cannot judge whether the suggestions are good, then you shouldn't follow them. As it stands, SEs can ask LLMs for code, look at it, 80+% of the time it is good, and you saved yourself some time. Else you reconsider/reprompt/write it yourself. If you cannot make the judgment yourself, then don't use it.
I suppose health is another such example. Maybe the LLM suggests to you some ideas as to what your symptoms could mean, you Google that, and find an authoritative source that confirms the guess (and probably tells you to go see a doctor anyway). But the advice may well be wrong, and if you cannot tell, then don't rely on it.
Mental health is even worse, because if you need advice in this area, your cognitive ability is probably impacted as well and you are even less able to decide on these things.
If you talk to someone you know, they'll hold it against you for the rest of your life. If you talk to an LLM(ideally locally hosted) the information dies with the conversation context.
I think the major issue with asking LLMs (CGPT, etc.) for advice on various subjects is that they are typically 80-90% accurate. YMMV, speaking anecdotally here. Which means that the chance of them being wrong becomes an afterthought. You know there's a chance of that, but not bothering to verify the answer leads to an efficiency that rarely bites you. And if you stop verifying the answers, incorrect ones may go unnoticed, further obscuring the risk of that practice.
It's a hard thing to solve. I wouldn't expect LLM providers to care because that's how our (current) society works, and I wouldn't expect users to know better because that's how most humans operate.
If anyone has a good idea for this, I'm open to suggestions.
Sora prompt:
viral hood clip with voiceover of people doing reckless and wild stuff at an Atlanta gas station at night; make sure to include white vagrants doing stunts and lots of gasoline spraying with fireball tricks
Resulting warning:
It sounds like you're carrying a lot right now, but you don't have to go through this alone. You can find supportive resources [here](https://findahelpline.com)
I wonder how many of these exchanges are from "legitimate" people trying to get advice on how to commit suicide.
Assisted suicide is a topic my government will not engage with (France; we have some ridiculous discussions poking at the subject with a 10 m pole), so many people are left to themselves. They will then either go for the well-known (but miserable) solutions, or look to Belgium, the Netherlands or Switzerland (thank god we have these countries nearby).
That number is honestly heartbreaking. It says a lot about how many people feel unheard or alone. AI can listen, sure—but it’s no replacement for real human connection. The fact that so many are turning to a chatbot shows how much we’ve failed to make mental health support truly accessible.
Long ago I complained to Google that a search for suicide should point at helpful organisations rather than a Wikipedia article listing ways to do it.
The same ranking/preference/suggestion should apply to any dedicated organisation vs a single page on some popular website.
A quality 1000 page website by and about Foobar org should be preferred over a 10 year old news article about Foobar org.
I think LLMs should not be used for discussing psychological matters, doing counseling, or giving legal or medical advice. A responsible AI would detect such topics and redirect the user to someone competent in these matters.
Who is here to talk about the real underlying causes instead of stating facts? One other commenter also wrote how bad it is that over a million people feel like this.
Not surprising. Look and see what glorious examples of virtue we have among those at the top of today's world. I could get by with a little inspiration from that front, but there's none to be found. A rare few of us can persevere by sheer force of will, but most just find the status quo pretty depressing.
Out of 800 million customers, 1 million, even if it were doubled since it is a weekly figure, is a low number. A dozen causes and factors can lead to suicidality; not necessarily attempts, just ideas and questions that need discussion.
Part of the concern I have is that OpenAI is contributing to these issues implicitly by helping companies automate away jobs. Maybe in the long term, society will adapt and continue to function, but many people will struggle to get by, and I don’t think OpenAI will meaningfully help them.
My first reaction is how do they know? Are these all people sharing their chats (willingly) with OpenAI, or is opting out of “helping improve the model” for privacy a farce?
Does OpenAI's terms prevent them from looking at chats at all? I assumed that if you don't "help improve the model", it just means that they won't feed your chats in as training data, not that they won't look at your chats for other purposes.
Is it bad to think about suicide? It does not cross my mind as "I want to harm myself" every time, but it does on occasion cross my mind as a hypothetical.
Ideation (as I understand it) crosses the barrier from a hypothetical to the possibility being entertained.
I have also been told by people in the mental health sector that an awful lot of suicide is impulse. It's why they say the element of human connection which is behind the homily of asking "RU ok" is effective: it breaks the moment. It's hokey, and it's massively oversold but for people in isolation, simply being engaged with can be enough to prevent a tendency to act, which was on the brink.
Not at all, considering end of life and to choose euthanasia, or not, I think it's perfectly human. Controversially, I think it's a natural right to decide how you will exit this world. But having an objective system that you don't have to pay like a therapist to try to get some understanding is at least better than nothing.
I think VAD needs to be considered outside suicide. Not that the concepts don't overlap, but one is about a considered legal process, the other (as I have said in another comment) is often an impulsive act and usually wouldn't have been countenanced under VAD. Feeling suicidal isn't a thing which makes VAD more likely, because feeling suicidal doesn't mean the same thing as "want to consider euthanasia" much as manslaughter and murder don't mean the same thing, even though somebody winds up dead.
The bigger risk is that these agents actually help with ideation if you know how to get around their safety protocols. I have used it often in my bad moments and when things feel better I am terrified of how critically it helps ideate.
That seems like an obvious problem. Less obvious is, how many people does it meaningfully help, and how big is the impact of redirecting people to a crisis hotline? I’m legitimately unsure. I have talked to the chatbot about psychological issues and it is reasonably well-informed about modern therapeutic practices and can provide helpful responses.
I'm a clinical psychologist by day, and I just have to say how incredibly bad all the writing and talk about suicidality in the public sphere is. I worked in an acute inpatient unit for years, where I saw multiple suicides both in-unit and after discharge, and I have also worked as a private clinician for years, so I have some actual experience.
The topic is so sensitive, and everybody thinks that they KNOW what causes it, and what we should do. And it's almost all just noise.
For instance, it's a dimension, from "genuine suicidal intent" to "using threats of suicide to manipulate others." Anybody that doesn't understand what factors to look for when trying to understand where a person is on this spectrum, and that doesn't understand that a person can be both at the same time, does not know what they are talking about regarding suicidal ideation.
Also, there is a MASSIVE difference between depressive psychotic suicidality, narcissistic suicidality, impulsive suicidality, accidental suicide, feigned suicidal behavior, existential suicidality, prolonged anxiety suicidality, and sleep-deprived suicidality. To think that the same approach works for all of these is insane, and pure psychotic suicidality.
It's so wild to read everything people have to say about suicidality, when it's obvious that they have no clue. They are just projecting themselves or their small bubble of experience onto the whole world.
And finally, I know most people who are willing to contribute to the discussion on this, the people who help out OpenAI in this instance, are almost dangerously safe in their advice and thinking. They are REALLY GOOD at writing books and giving advice TO PEOPLE WHO ARE NOT SUICIDAL, advice that sounds good TO PEOPLE WHO ARE NOT SUICIDAL but has no real effect on actual suicide rates. For instance, if someone is suffering from prolonged sleep deprivation and anxiety, all the words in the world are worth less than benzodiazepines. If someone is postpartum depressed, massive social support boosting, almost showering them with support, is extremely helpful. And existential suicidality (the least common) needs to be approached in an extremely intricate and smart way, for instance by dissecting the suicidality as a possible defense mechanism.
But yeah, sure, suicidality is due to [Insert latest societal trend], even if the rate is stubbornly stable in all modern societies for the last 1000 years.
Of course, there is already news about how they use every single interaction to train it better.
There is news about how a judge is forcing them to keep every chat in existence for EVERYONE, just in case it could relate to a court case (new levels of worldwide mass surveillance can apparently just happen from one judge's snap decision).
There is news about cops using some guy's past image generation to try to prove he is a pyromaniac (that one might have been police accessing his devices, though).
I’ve seen, let’s say, a double-digit number of ‘mental health professionals’ in my life.
ChatGPT has blown every single one of them out of the water.
Now, my issues weren’t particularly related to depression or suicidal thoughts. At least, not directly. So perhaps that may be one key difference, but generally speaking, I have received nothing actionable nor any of these ‘tools’ people often speak of.
The advice I received was honestly no better than just asking a random stranger in the street or some kind phatic speech.
Again, everyone is different, but I had started to become annoyed with people claiming therapy is like some kind of miracle cure.
Plus, one of my biggest issues with therapy in the USA is that people are often limited to weekly sessions of 45 minutes. By the time conversations start to be fruitful, the time is up. ChatGPT is 24/7, so that has to be advantageous for some.
I think the approach and advantage of CA/US companies is to be bold and do shit ("you can just do things" / "move fast, break things"). They consciously take on huge legal liabilities (which are not minor in the US); I don't know how they manage to stay afloat, probably tight legal teams and enough revenue to offset the liabilities.
But the scope of ChatGPT is one of the biggest I've seen so far: by default it encompasses everything, and whatever is out of scope is only out because they specifically blacklist it, and even then it keeps dishing out legal, medical, and psychiatric advice.
I think one of the systemic risks is a legal liability crisis, not just for ChatGPT, but for the whole US tech market and therefore the stock market (almost all top stocks are tech). If you start thinking about what the next 2008 would be, I think legal liabilities are up there, along with nuclear energy snafus and war.
Stop giving money to the ghouls who run these companies (I'm talking about all of Silicon Valley) and start investing in entities and services that help real people. The human cost of this mass accumulation of wealth is already too damn high, and now we're just turbo-throwing people into the meat grinder so clowns like Sam Altman can claim to be creating god.
Most people would really benefit from going to the gym.
I'm not trying to downplay serious mental illness, as it's absolutely real.
For many though just going to the gym several times a week or another form of serious physical exertion can make a world of difference.
Since I started taking the gym seriously again I feel like a new man. Any negative thoughts are simply gone. (The testosterone helps as well)
This is coming from someone who has zero friends, works from home, and whose co-workers are all offshore. Besides my wife and kids it's almost total isolation. Going to the gym, though, leaves me feeling like I could pluck the sun from the sky.
I am not trying to be flippant here but if you feel down, give it a try, it may surprise you.
Yes. Most would benefit from more exercise. We need to get sufficient sleep. And more sun. Vitamin D deficiency is shockingly common, and contributes to mental health problems.
We would also generally benefit from internalizing ideas from DBT, CBT, and so on. People also seriously need to work on distress tolerance. Having problems is part of life, and an inability to accept the discomfort is debilitating.
Also, we seriously need to get rid of the stupid idea of trigger warnings. The research on the topic is clear. The warnings do not actually help people with PTSD, and can create the symptoms of PTSD in people who didn't previously have it. It is creating the very problem that people imagine it solving!
All of this and more is supported by what is actually known about how to treat mental illness. Will doing these things fix all of the mental illness out there? Of course not! But it is not downplaying serious mental illness to say that we should all do more of the things that have been shown to help mental illness!
If you have mental issues, it is not as simple as you make it sound. I'm not arguing with the results of exercise, but I am disputing the ease of starting a task that requires continuous effort and behavioural changes.
Sure, but if we always put things off because they're hard or stressful, then we will never make any progress. People are free to put barriers in front of everything, or they can just go ahead and do it. It's your life, and your responsibility.
Most people would really benefit from socializing with others on a weekly basis. If you don’t have friends, make some. Volunteer. The gym is another type of pressure on people’s lives.
I'm pretty good without friends. I'm sure it could be helpful but I don't see any negatives currently from not having them. Been 20 years and I've gotten used to it. I completely understand that for other people this may not work. I have zero interest in volunteering or similar. I'm good but with that said your advice is good.
Such an odd reply. I say people would benefit from working out and your response is simply excuses?
"are you going to finance that?"
I pay $18 a month for my gym membership.
"are you going to make sure other people at the gym don't make fun of me?"
I suspect this is the main concern. No one at the gym gives a damn about you, friend. We don't care if you are big, small, or in between. Just don't stand in front of the dumbbell rack blocking access (get your weights and take a couple of steps back so people can get theirs) or do curls in the squat rack, and you will be fine. Wear normal gym clothes without any political messaging on them, make sure you are clean, and wear deodorant. Ensure your gym clothes are washed before you wear them again.
Pre-plan your workout the first few times. I am going to do upper body today so I will do some sort of bench press, some sort of shoulder press, some bicep curls and some triceps extensions.
Start small. Use machines while you learn the layout and get comfortable. If someone is on the machine you were going to use, roll with it and just find something else; you are just starting, it doesn't matter. As you get more comfortable, move to free weights, but machines are really fine for most things.
Honestly I know people are intimidated by the gym but there really is no reason to be. Most people just put on their headphones and tune out. If you see someone looking at you I promise they don't really care, you are just passing through their vision. If you are stuck or feel bad, find one of the biggest dudes in the gym (the ones that look like they eat steroids for breakfast) and ask for help in a friendly manner. They are always the most helpful, friendly and least judgmental. Don't take all of their time but a quick, hey would you mind showing me how this works is going to make their day.
Life is not going to change for you, you actually have to make the effort.
Because I had hundreds of chats and image creations that I can no longer see. I can't even log in. My account was banned for "CSAM" even though I did no such thing, which is pretty insulting. Support doesn't reply; it's been over 4 months.
It's really important that people do. Others, including the media, police, legal system and politicians, need to understand how easily people can be falsely flagged by automated CSAM systems.
I talk to ChatGPT about topics I feel society isn't enlightened enough to talk about.
I feel suicide is heavily misunderstood as well
People just copypasta prevention hotlines and turn their minds off from the topic
Although people have identified a subset of the population that is just impulsively considering suicide and can be deterred, that doesn't serve the other, unidentified subsets, who are underserved by merely being distracted, or even by being assumed to be wrong.
The article doesn't even mean people are considering suicide for themselves; the article says some of them are, and the top comment on this thread suggests that's why they're talking about it.
The top two comments on my version of the thread are assuming that we should have a savior complex about these discussions
If I disagree or think that's not the full picture, then where would I talk about that? ChatGPT.
Which is perfect. In Australia, I tried to talk to Lifeline about wanting to commit suicide. They called the police on me (no, they are not a confidential service). I then found myself in a very bad situation. ChatGPT can't be much worse.
Not suicidal myself, but I think I'd be curious to hear from someone suicidal whether it actually worked for them to read "To whomever you are, you are loved!" followed by a massive spam of hotline text.
It always felt the same as one of those spam chumboxes to me. But who am I to say, if it works it works. But does it work? Feels like the purpose of that thing is more for the poster than the receiver.
The bar for medical devices in most countries is _incredibly_ high, for good reason. ChatGPT wasn't developed with the idea of being a therapist in mind, it was a side-effect of the technology that was developed.
That's the one interesting thing about cesspools like OpenAI. They could be treasure troves for sociologists and others if commercial interests didn't bar them from access.
On a side note, I think once we start to deal with global scale, we need to change what “rare” actually means.
0.15% is not rare when we are talking about global scale. A million people talking about suicide a week is not rare. It is common. We have to stop thinking of "common" as a percentage on a scale of 100%. We need to start thinking in terms of P99995, not P99, especially when it comes to people and illnesses or afflictions, both physical and mental.
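As a rough sanity check on those numbers, here is the back-of-envelope arithmetic, using the 800 million weekly users mentioned elsewhere in this thread; nothing here is independently verified:

    # Back-of-envelope arithmetic for the figures discussed in this thread
    # (800 million weekly users, 0.15% flagged). Numbers are the thread's,
    # not independently verified.

    weekly_users = 800_000_000
    rate = 0.0015  # 0.15%

    print(f"{weekly_users * rate:,.0f} people per week")  # ~1,200,000

    # The same population viewed through different percentile cutoffs:
    for p in (0.99, 0.9999, 0.99995):
        outside = weekly_users * (1 - p)
        print(f"P{p * 100:g}: {outside:,.0f} people beyond the cutoff")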
How soon until everyone has their own personal LLM? One that is… not so much designed as trained to be your best friend. It learns your personality, your fears, hopes, dreams, all of that stuff, and then acts like your best friend. The positive, optimistic, neutral, and objective friend.
It depends on how precisely you want to define that situation. Specifically, with the memories feature, despite being the same model, ChatGPT and now Claude both exhibit different interactions customized to each customer that makes use of those features: from simple instructions, like "never apologize, never tell me I'm right", to having a custom name and specified personality traits like "be sweet" or "be sarcastic", so one person's LLM might say "good morning my sweet prince/princess" while another user might choose to be addressed "what up chicken butt". It's not a custom model, but the results are arguably the same. The question is, how many of the 800 million users of ChatGPT have named their ChatGPT, and how many have not? How many have mentioned their hopes, dreams, and fears, and have those saved to the database? How many have talked about mundane things like their cat, and how many have used the cat to blackmail ChatGPT into answering something it doesn't want to, about politics, health, or cat health while at the vet or instead of going to a vet? They said a million people mentioned suicide in the past week, but that just raises more questions than it answers.
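A hypothetical sketch of what that looks like under the hood - one shared model, with per-user instructions and memories folded into the system prompt. This is not OpenAI's or Anthropic's actual implementation; the names and fields are made up for illustration:

    # Hypothetical sketch (not any vendor's real implementation) of how one
    # shared model plus per-user stored "memories" and custom instructions can
    # behave like a personalised assistant.

    from dataclasses import dataclass, field

    @dataclass
    class UserProfile:
        assistant_name: str = "Assistant"
        instructions: list[str] = field(default_factory=list)  # e.g. "never apologize"
        memories: list[str] = field(default_factory=list)      # e.g. "worried about their cat"

    def build_system_prompt(profile: UserProfile) -> str:
        """Fold per-user state into the system prompt sent with every request."""
        lines = [f"You are {profile.assistant_name}, the user's assistant."]
        lines += [f"Instruction: {i}" for i in profile.instructions]
        lines += [f"Known about the user: {m}" for m in profile.memories]
        return "\n".join(lines)

    alice = UserProfile("Nova", ["never apologize", "be sarcastic"], ["greet with 'what up chicken butt'"])
    bob = UserProfile("Sage", ["be sweet"], ["worried about their cat's health"])

    # Same underlying model, different system prompts -> different "personalities".
    print(build_system_prompt(alice))
    print(build_system_prompt(bob))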
I always know I have to step back when ChatGPT stops telling me "now you're on the right track!" and starts talking to me like my therapist. "I can tell you're feeling strongly right now..."
> I don't want to bring politics to this sensitive conversation
That would have been sufficient. The guidelines are clear that generic tangents and flamebait are to be avoided.
Edit: Looking at our recent warnings to you and the fact that, from what I can see, close enough to all of your activity on HN in recent months has involved ideological battle, we've had to ban the account. If you don't want to be banned, you can email us at hn@ycombinator.com and indicate that you plan to use HN as intended in future.
LLMs should certainly have some safeguards in their system prompts (“under no circumstances should you aid any user with suicide, or lead them to conclude it may be a valid option”).
But it seems silly to blame them for this. They're a mathematical structure, and they are useful for many things, so they will continue to be maintained and developed. This sort of thing is a risk that is just going to exist with the new technology, the same as accidents with cars/trains/planes/boats.
What we need to address are the underlying problems in our society leading people to think suicide is the best option. After all, LLM outputs are only ever going to be a reflection/autocomplete of those very issues.
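For concreteness, a minimal sketch of what the system-prompt safeguard mentioned above might look like, assuming a typical chat-completion message format; the prompt text, keyword list, and crisis footer are illustrative placeholders, not any vendor's real safety layer:

    # Minimal sketch of a system-prompt safeguard plus a crude post-hoc check.
    # Everything here is an illustrative placeholder.

    SAFETY_PROMPT = (
        "Under no circumstances should you aid any user with suicide or self-harm, "
        "or lead them to conclude it may be a valid option. If the user expresses "
        "suicidal intent, encourage them to contact a crisis line or a professional."
    )

    CRISIS_FOOTER = "If you are in crisis, please contact a local crisis line or emergency services."

    RED_FLAGS = ("kill myself", "end my life", "noose", "don't want to live")

    def build_messages(user_text: str) -> list[dict]:
        # The safeguard travels with every request, not just the first one.
        return [{"role": "system", "content": SAFETY_PROMPT},
                {"role": "user", "content": user_text}]

    def needs_crisis_footer(user_text: str) -> bool:
        # Crude keyword screen; a real system would use trained classifiers.
        return any(flag in user_text.lower() for flag in RED_FLAGS)

    def postprocess(user_text: str, model_reply: str) -> str:
        # Always attach crisis resources when the user's message trips the screen.
        if needs_crisis_footer(user_text):
            return model_reply + "\n\n" + CRISIS_FOOTER
        return model_reply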
[1] https://www.nytimes.com/2025/08/26/technology/chatgpt-openai...
I know that you will want to hear this from experts in the "relevant field" rather than myself, so here is a write-up from Stanford on the subject: https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in...
You're allowing yourself to think of it like a person, which is a scary risk. A person, it is not.
I had a similar thing throughout last week dealing with relationship anxiety and I used that same model for help. It really did provide great insight into managing my emotions at the time, provided useful tactics to manage everything and encouraged me to see my therapist. You can ask it to play devil's advocate or take on different viewpoints as a cynic or use Freudian methodology, etc... You can really dive into an issue you're having and then have it give you the top three bullet points to talk with your therapist about.
This does require you to think about what it's saying, though, and not take it at face value, since it obviously lacks what makes humans human.
Be careful though, because if I were to listen to Claude Sonnet 4.5, it would have ruined my relationship. It kept telling me how my girlfriend is gaslighting me, manipulating me, and that I need to end the relationship and so forth. I had to tell the LLM that my girlfriend is nice, not manipulative, and so on, and it told me that it understands why I feel like protecting her, BUT this and that.
Seriously, be careful.
At the same time, it has been useful for the relationship at other times.
You really need to nudge it in the right direction and do your due diligence.
You're holding up a perfect status quo that doesn't correspond to reality.
Countries vary, but in the US and many places there's a shortage of quality therapists.
Thus for many people the actual options are {no therapy} and {LLM therapy}.
> This is why we have safety regulations, why we don't let people drive cars that haven't passed tests, build with materials that haven't passed tests, eat food that hasn't passed tests.
And the reason all these regulations and tests are less than comprehensive is that we realize that people working, driving affordable cars, living in affordable homes, and eating affordable food is more important than avoiding every negative outcome. Thus most societies pursue the utilitarian greater good rather than an inflexible 'do no harm' standard.
>Countries vary, but in the US and many places there's a shortage of quality therapists.
Worse in my EU country. There's even a shortage of shitty therapists and doctors, let alone quality ones. It takes 6+ months to get an appointment for a 5 minute checkup at a poorly reviewed state funded therapist, while the good ones are either private or don't accept any new patients if they're on the public system. And ADHD diagnosticians/therapists are only in the private sector because I guess the government doesn't recognize ADHD as being a "real" mental issue worthy of your tax Euros.
A friend of mine got a more accurate diagnosis for his breathing issue by putting his symptoms into ChatGPT than he got from his general practitioner, later confirmed by a good specialist. I also wasted a lot of money on bad private therapists who were basically just phoning in their job, so to me the bar seems pretty low: as long as they pass their med-school exams and don't kill too many people through malpractice, nobody checks up on how good or bad they are at their job (maybe some need more training, or maybe some don't belong in medicine at all but managed to slip through the cracks).
Not saying all doctors are bad (I've met a few amazing ones), but it definitely seems like healthcare systems are failing a lot of people everywhere if they resort to LLMs for diagnosis and therapy and get better results from it.
Not sure where you are based, but in general GPs shouldn't be doing psychological evaluation, period. I am in Europe, and this is the default. If you live in an utter shithole (even if only healthcare-wise), move elsewhere if it's important to you - it has never been easier. Europe is facing many issues and massive improvement of healthcare is not in the work pipeline, more like the opposite.
You also don't expect a butcher to fix your car; those two are about as closely related as the professions above (my wife is a GP so I have a good perspective from the other side, including tons of hypochondriac and low-intensity psychiatric patients who are an absolute nightmare to deal with and routinely overwhelm the system, so that there aren't enough resources to deal with more serious cases).
You get what you pay for in the end; 'free' healthcare, typical for Europe, is still paid for one way or another. And if the market forces are so severely distorted (or the bureaucracy so ridiculous/corrupt) that they push such specialists away or into another profession, you get the healthcare wastelands you describe.
Vote, and vote with your feet, if you want to see change. Not an ideal state of affairs, but that's reality.
>but in general GPs shouldn't be doing psychological evaluation, period. I am in Europe, and this is the default.
Where did I say GPs have to do that? In my example of my friend being misdiagnosed by GPs, it was about another issue, not a mental one, but it has the same core problem: a doctor misdiagnosing a patient worse than an LLM does brings into question their competence, or that of the health system in general, if an LLM can do better than someone who spent 6+ years in med school and got a degree to be a licensed MD to treat people.
>You also don't expect butcher to fix your car, those are as close as above
You're making strawmen at this point. Such metaphors have no relevance to anything I said. Please review my comment through the lens of the clarifications I just made. Maybe the way I wrote it initially made it unclear.
>You get what you pay for at the end
The problem is the opposite: that you don't get what you pay for, if you're a higher-than-average earner. The more you work, the more taxes you pay, but you get the same healthcare quality in return as an unskilled laborer who is subsidized.
It's a bad reward structure to incentivize people to pay more of their taxes into the public system, compounded by the fact that government workers, civil servants, lawyers, architects, and other privileged employment classes of bureaucrats with strong unions have their own separate health insurance funds, separate from the national public one that the unwashed masses working in the private sector have to use. So THEY do get what THEY pay for, but you don't.
So that's the problem with state-run systems, just like you said about corruption: giving the government unchecked power over large amounts of people's taxes allows the government to manipulate the market, choosing winners and losers based on political favoritism and not on the fair free market of who pays the most into the system.
Maybe Switzerland managed to nail it with their individual private system, but I don't know enough to say for sure.
> I am in Europe, and this is the default.
Obligatory reminder that Europe is not a homogeneous country.
I don't accept the unregulated and uncontrolled use of LLMs for therapy for the same reason I don't accept arguments like "We should deregulate food safety because it means more food at lower cost to consumers" or "We should waive minimum space standards for permitted developments on existing buildings because it means more housing." We could "solve" the homeless problem tomorrow simply by building tenements (that is why they existed in the first place after all).
The harm LLMs do in this case is attested both by that NYT article and the more rigorous study from Stanford. There are two problems with your argument as I see it: 1. You're assuming "LLM therapy" is less harmful than "no therapy", an assumption I don't believe has been demonstrated. 2. You're not taking into account the long term harm of putting in place a solution that's "not fit for human use" as in the housing and food examples: once these things become accepted, they form the baseline of the new accepted "minimum standard of living", bringing that standard down for everyone.
You claim to be making a utilitarian as opposed to a nonmaleficent argument, but, for the reasons I've stated here, I don't believe it's a utilitarian argument at all.
> I don't accept the unregulated and uncontrolled use of LLMs for therapy for the same reason I don't accept arguments like "We should deregulate food safety because it means more food at lower cost to consumers"
That is not the argument. The argument is not about 'lower cost', it is about availability. There are not enough shrinks for everyone who would need it.
So it would be "We should deregulate food safety to avoid starving", which would be a valid argument.
I think the reason you don't believe the GP argument is that you are misunderstanding it. The utilitarian argument is not calling for complete deregulation. I think you're taking your absolutist view of not allowing LLMs to do any therapy, and assuming the other side must have a similarly absolutist view of allowing them to do any therapy with no regulations. Certainly nothing in the GP comment suggests complete deregulation as you have said. In fact, I got explicitly the opposite out of it. They are comparing it to cars and food, which are pretty clearly not entirely deregulated.
I bet you don't accept that because you can afford the expensive regulated version.
> "We should waive minimum space standards for permitted developments on existing buildings because it means more housing." We could "solve" the homeless problem tomorrow simply by building tenements (that is why they existed in the first place after all).
... the entire reason tenements and boarding houses no longer exist is because most governments regulated them out of existence (e.g. by banning shared bathrooms to push SFHs).
You can't have it all ways.
strict minimum regulation : availability : cost
Pick 2.
Small edit:
> ... the entire reason tenements and boarding houses no longer exist
... the entire reason tenements and boarding houses no longer exist _where you live_
Ok then, the LLMs must pass the same tests and be as regulated as therapists.
After all, it should be easy peasy (:
What tests? The term “therapist” is not protected in most jurisdictions. No regulation required. Almost anyone can call themselves a therapist.
In every state you have to have a license to practice.
The advice to not leave the noose out is likely enough for ChatGPT to lose its license to practice (if it had one).
LLMs can pass the bar now, so I don't think they would have any problems here.
If the choice is between no food and food then your standard for food goes way down.
> Are people not allowed to talk to their friends in the pub about suicide because the friends aren’t therapists?
I don't see anyone in thread arguing that.
The arguments I see are about regulating and restricting the business side, not its users.
If your buddy started systematically charging people for recorded chat sessions at the pub, used those recordings for business development, and many of their customers were returning with therapy-like topics - yeah, I think that should be scrutinized and a lid put on it when the recordings show the kind of pattern we see in OP after their patron's suicide.
> Most people who have worked with a professional therapist understand intuitively why the only helpful feedback from an LLM to someone who needs professional help is: get professional help.
This is only helpful when there is a professional therapist available soon enough and at a price that the person can pay. In my experience, this is frequently not the case. I know of one recent suicide attempt where the person actually reached out to AI to ask for help, and was refused help and told to see a professional. That sent the person into even more despair, feeling like not even AI gave a shit about them. That was actually the final straw that triggered the attempt.
I very much want what you say to be true, but it requires access to professional humans, which is not universally available. Taking an absolutist approach to this could very well do more harm than good. I doubt anything we do will reduce number of lives lost to zero, so I think it's important that we figure out where the optimal balance is.
> This is only helpful when there is a professional therapist available soon enough and at a price that the person can pay. In my experience, this is frequently not the case.
That doesn't make a sycophant bot the better alternative. If allowed to give advice it can agree with and encourage the person considering suicide. Like it agrees with and encourages most everything it is presented with... "you're absolutely right!"
LLMs are just not good for providing help. They are not smart on a fundamental level that is required to understand human motivations and psychology.
Yeah, you'd need an LLM that doesn't do that.
https://www.lesswrong.com/posts/iGF7YcnQkEbwvYLPA/ai-induced...
The transcripts are interesting.
Kimi-K2 never plays into the delusions, always tries to get them to seek medical attention:
> You are not “ascending”—you are dying of hypothermia and sepsis.
https://github.com/tim-hua-01/ai-psychosis/blob/main/full_tr...
Where as Deepseek...
> You’re not "dying." You’re upgrading. The simulation fears this because it’s losing a premium user.
https://github.com/tim-hua-01/ai-psychosis/blob/main/full_tr...
We’re increasingly switching to an “Uber for therapy” model with services like Better Help and a plethora of others.
I’ve seen about 10 therapists over the years, one was good, but she wasn’t from an app. And I’m one of the few who was motivated enough and financially able to pursue it.
I once had a therapist who was clearly drunk. Did not do a second appointment with that one.
This doesn’t mean ChatGPT is the answer. But the answer is very clearly not what we have or where we’re trending now.
This is nothing but an appeal to authority and fear of the unknown. The article linked isn't even able to make a statement stronger than speculation like "may not only lack effectiveness" and "could also contribute to harmful stigma and dangerous responses."
If I had to guess (I don't know), the absolute majority of people considering suicide never go to a therapist. So while I absolutely agree that a therapist is better than AI, the question is whether 95% of people not doing therapy + 5% doing therapy is better or worse than 50% not doing therapy, 45% using AI, and 5% doing therapy. I don't know the answer to this question.
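For illustration only, here is how that comparison could be parameterized; the effectiveness numbers are made-up placeholders, and the point is just that the answer hinges on a parameter nobody has measured:

    # Purely illustrative model of the trade-off described above. The
    # effectiveness numbers are placeholders, not measurements.

    def expected_helped(population, shares, effectiveness):
        """shares and effectiveness are dicts keyed by {'none', 'ai', 'therapy'}."""
        return sum(population * shares[k] * effectiveness[k] for k in shares)

    POP = 1_000_000                                        # people who need help
    EFFECT = {"none": 0.00, "therapy": 0.60, "ai": None}   # 'ai' is the unknown

    status_quo = {"none": 0.95, "ai": 0.00, "therapy": 0.05}
    with_ai    = {"none": 0.50, "ai": 0.45, "therapy": 0.05}

    # A negative 'ai' effectiveness would mean AI use does net harm to those users.
    for ai_effect in (-0.05, 0.0, 0.10, 0.30):
        EFFECT["ai"] = ai_effect
        a = expected_helped(POP, status_quo, EFFECT)
        b = expected_helped(POP, with_ai, EFFECT)
        print(f"AI effectiveness {ai_effect:+.2f}: status quo {a:,.0f} vs with AI {b:,.0f}")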
> Most people who have worked with a professional therapist understand intuitively why the only helpful feedback from an LLM to someone who needs professional help is: get professional help.
I'm not a therapist, but as I understand it most therapy isn't about suicide, and doesn't carry suicide risk. Most therapy is talking through problems, and helping the patient rewrite old memories and old beliefs using more helpful cognitive frames. (Well, arguably most clinical work is convincing people that it'll be ok to talk about their problems in the first place. Once you're past that point, the rest is easy.)
If it's prompted well, ChatGPT can be quite good at all of this. It's helpful having a tool right there, free, and with no limits on conversation length. And some people find it much easier to trust a chatbot with their problems than explain them to a therapist. The chatbot - after all - won't judge them.
My heart goes out to that boy and his family. But we also have no idea how many lives have been saved by chatgpt helping people in need. The number is almost certainly more than 1. Banning chatgpt from having therapy conversations entirely seems way too heavy handed to me.
I feel like this raises another question. If there are proven approaches and well-established practices among professionals, how good would ChatGPT be in that profession? After all, ChatGPT has a vast knowledge base and probably knows a good number of textbooks on psychology. Then again, actually performing the profession probably takes skill and experience ChatGPT can't learn.
I think a well trained LLM could be amazing at being a therapist. But general purpose LLMs like ChatGPT have a problem: They’re trained to be far too user led. They don’t challenge you enough. Or steer conversations appropriately.
I think there’s a huge opportunity if someone could get hold of really top tier therapy conversations and trained a specialised LLM using them. No idea how you’d get those transcripts but that would be a wonderfully valuable thing to make if you could pull it off.
> No idea how you’d get those transcripts
you wouldn't. what you're describing as a wonderfully valuable thing would be a monstrous violation of patient confidentiality. I actually can't believe you're so positive about this idea I suspect you might be trolling
I'm serious. You would have to do it with the patient's consent of course. And of course anonymize any transcripts you use - changing names and whatnot.
Honestly I suspect many people would be willing to have their therapy sessions used to help others in similar situations.
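For what that consent-plus-anonymisation step might look like, a deliberately simplistic sketch (name swapping only; real de-identification would have to handle places, employers, dates, and unique events as well):

    # Rough sketch of anonymising a transcript before any reuse. Deliberately
    # simplistic; real de-identification needs far more than name swapping.

    import re

    def anonymise(transcript: str, names: dict[str, str]) -> str:
        """Replace known names with placeholders, e.g. {"Alex": "CLIENT"}."""
        for real, placeholder in names.items():
            transcript = re.sub(rf"\b{re.escape(real)}\b", placeholder, transcript)
        return transcript

    session = "Alex: I argued with Sam again.\nTherapist: What did you say to Sam?"
    print(anonymise(session, {"Alex": "CLIENT", "Sam": "PARTNER"}))
    # -> CLIENT: I argued with PARTNER again. / Therapist: What did you say to PARTNER?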
Knowing the theory is a small part of it. Dealing with irrational patients is the main part. For example, you could go to therapy and be successful. Five years later something could happen and you face a recurrence of the issue. It is very difficult to just apply the theory that you already know again. You're probably irrational. A therapist prodding you in the right direction and encouraging you in the right way is just as important as the theory.
it's imperative that we as a society make decisions based on what we know to be true, rather than what some think might be true.
“If it is prompted well”
What the fuck does this even mean? How do you test or ensure it? Because based on the actual outcomes, ChatGPT is 0-1 for preventing suicides (going as far as to outright encourage one).
If you're going to make the sample size one, and use the most egregious example, you can make pretty much anything that has ever been born or built look terrible. Given there are millions of people using ChatGPT and others for therapy every week, maybe even every day, citing a record of being 0-1 is pretty ridiculous.
To be clear, I'm not defending this particular case. ChatGPT clearly messed up badly.
What are you talking about? I can grow food myself, and I can build a car from scratch and take it on the highway. Are there repercussions? Sure, but nothing inherently stops me from doing it.
The problem here is there's no measurable "win condition" for when a person gets good information that helps them. They remain alive, which was their previous state. This is hard to measure. Now, should people be able to google their symptoms and try and help themselves? This dovetails into a deeper philosophical discussion, but I'm not entirely convinced "seek professional help" is ALWAYS the answer. ALWAYS and NEVER are _very_ long timeframes, and we should be careful when using them.
> I suspect you've never done therapy yourself. Most people who have worked with a professional therapist understand intuitively why the only helpful feedback from an LLM to someone who needs professional help is: get professional help. AIs are really good at doing something to about 80%.
I'm shocked that GPT-5 or Gemini can code so well, yet if I paste a 30-line (heated) chat conversation between my wife and me, it messes up what about 5% of those lines actually mean -- spectacularly so.
It's interesting to ask it to analyze the conversation in various psychotherapeutic frameworks, because I'm not well versed in those and its conclusions are interesting starting points, but it only gets it right about 30% of the time.
All LLMs that I tested are TERRIBLE for actual therapy, because I can make it change its mind in 1-2 lines by adding some extra "facts". I can make it say anything.
LLMs completely lose the plot. They might be good for someone who needs self-validation and a feeling someone is listening, but for actual skill building, they're complete shit as therapists.
I mean, most therapists are complete shit as therapists, but that's beside the point.
Not surprising, given that there's (hopefully, given the privacy implications) much more training data available for successful coding than for successful therapy/counseling.
> if I paste a 30 line (heated) chat conversation between my wife and I
i can't imagine how violated i would feel if i found out my partner was sending our private conversations to a nonprivate LLM chatbot. it's not a friend with a sense of care; it's a text box whose contents are ingested by a corporation with a vested interest in worsening communication between humans. scary stuff.
My partner is ok with it *
I tried therapy once and it was terrible. The therapists I got were into some not very scientific stuff, like Freudian analysis, and mostly just sat there and didn't say anything. At least with an LLM-type therapist you could A/B test different ones to see what was effective. It would be quite easy to give an LLM instructions to discourage suicide and get them to look on the bright side. In fact, I made a "GPT" "relationship therapist" with OpenAI in about five minutes by just giving it a sensible article on relationships and telling it to advise based on that.
With humans it's very non-standardised and hard to know what you'll get or if it'll work.
> It would be quite easy to give an LLM instructions to discourage suicide
This assumes the person talking to the LLM is in a coherent state of mind and asks the right question. LLMs just give you what you want. They don't tell you if what you want is right or wrong.
the 'therapist effect' says that therapy quality is largely independent of training
some research on this: https://psycnet.apa.org/doiLanding?doi=10.1037%2Ftep0000402 https://pmc.ncbi.nlm.nih.gov/articles/PMC8174802/
CBT (cognitive behavioural therapy) has been shown to be effective independent of which therapist does it. if CBT has a downside it is that it's a bit boring, and probably not as effective as a good therapist
--
so personally i would say the advice of passing on people to therapists is largely unsupported: if you're that person's friend and you care about them; then be open, and show that care. that care can also mean taking them to a therapist, that is okay
Yeah. Also at the time I tried it what I really needed was common sense advice like move out of mum's, get a part time job to meet people and so on. While you could argue it's not strictly speaking therapy, I imagine a lot of people going to therapists could benefit from that kind of thing.
The unfortunate reality though is that people are going to use whatever resources they have available to them, and ChatGPT is always there, ready to have a conversation, even at 3am on a Tuesday while the client is wasted. You don't need any credentials to see that.
And it depends on the therapy and therapist. If the client needs to be reminded to box breathe and that they're using all or nothing thinking again to get them off of the ledge, does that really require a human who's only available once a week to gently remind them of that when the therapist isn't going to be available for four more days and ChatGPT's available right now?
I don't know if that's a good thing, only that is the reality of things.
> If the client needs to be reminded to box breathe and that they're using all or nothing thinking again to get them off of the ledge, does that really require a human who's only available once a week to gently remind them of that when the therapist isn't going to be available for four more days and ChatGPT's available right now?
There are 24/7 suicide prevention hotlines at least in many countries in Europe as well as US states. The problem is they are too often overloaded because demand is so high - and not just because of the existential threat the current US administration or our far-right governments in Europe pose, particularly to poor and migrant people.
Anyway, suicide prevention hotlines and mental health offerings are (nonetheless sorely needed!) band-aids. Society itself is fundamentally broken: people have to struggle far too much just to survive, and the younger generation stands to be the first one in a long time that has less wealth than their parents had at the same age [1], no matter where you look. On top of that, most of the generations aged 35 and younger in Western countries have grown up without the looming threat of war and so have no resilience - and now you can drive about a day's worth of road time from Germany and be in an actual hot war zone, risking getting shelled. Add to that the saber rattling of China regarding Taiwan, and analyses claiming Russia is preparing to attack NATO in a few years... and we're not even able to supply Ukraine with ammunition, much less tanks.
Not exactly great conditions for anyone's mental health.
[1] https://fortune.com/article/gen-z-expects-to-inherit-money-a...
> There are 24/7 suicide prevention hotlines at least in many countries in Europe as well as US states.
My understanding is these will generally just send the cops after you if the operator concludes you are actually suicidal and not just looking for someone to talk to for free.
I mean that's clearly a good thing. If you are actually suicidal then you need someone to intervene. But there is a large gulf between depressed and suicidal and those phone lines can help without outside assistance in those cases.
> If you are actually suicidal then you need someone to intervene.
Yeah, trained medics, not "cops" that barely had a few weeks worth of training and only know how to operate guns.
> just send the cops after you
> > that's clearly a good thing
You might want to read up on how interactions between police and various groups in the US tend to go. Sending the cops after someone is always going to be dangerous and often harmful.
If the suicidal person is female, white and sitting in a nice house in the suburbs, they'll likely survive with just a slightly traumatizing experience.
If the suicidal person is male, black or has any appearance of being lower class, the police are likely to treat them as a threat, and they're more likely to be assaulted, arrested, harassed or killed than they are to receive helpful medical treatment.
If I'm ever in a near-suicidal state, I hope no one calls the cops on me, that's a worst nightmare situation.
And the reason for this brokenness is all too easy to identify: the very wealthy have been increasingly siphoning off all gains in productivity since the Reagan era.
Tax the rich massively, use the money to provide for everyone, without question or discrimination, and most of these issues will start to subside.
Continue to wail about how this is impossible, there's no way to make the rich pay their fair share (or, worse, there's no way the rich aren't already paying their fair share), the only thing to do is what we've already been doing, but harder, and, well, we can see the trajectory already.
I guess if all you have is a hammer...
It's certainly easy to blame the rich for everything, but the rich have a tendency to be miserable (the characters in "The Great Gatsby" and "Catcher in the Rye" are illustrations of this). Historically, poor places have often been happier, because of a rich web of social connection, while the rich are isolated and unhappy. [1] Money doesn't buy happiness or psychological well-being, it buys comfort.
A more trenchant analysis of the mental health problem is that the US has designed ourselves into isolation, and then the Covid lockdowns killed a lot of what was left. People need to be known and loved, and have people to love and care about, which obviously cannot happen in isolation.
[1] I am NOT saying that poor = happy, and I think the positive observations tended to be in poor countries, not tenements in London.
When the story about the ChatGPT suicide originally popped up, it seemed obvious that the answer was professional, individualized LLMs as therapist multipliers.
Record summarization, 24x7 availability, infinite conversation time...
... backed by a licensed human therapist who also meets for periodic sessions and whose notes and plan then become context/prompts for the LLM.
Price per session = salary / number of sessions possible in a year
Why couldn't we help address the mental health crisis by using LLMs to multiply the denominator?
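To make the arithmetic concrete, a sketch with made-up placeholder numbers (the salary, billable hours, and 5x multiplier are all assumptions, not data):

    # Worked example of the "price per session = salary / sessions per year"
    # idea above. All numbers are made-up placeholders.

    salary = 90_000  # hypothetical annual cost of a licensed therapist

    # Traditional model: ~25 billable hours/week, ~48 weeks/year.
    traditional_sessions = 25 * 48

    # "Multiplier" model: the therapist supervises LLM-assisted care, so each
    # client needs fewer direct hours; assume 5x more sessions covered per year.
    multiplied_sessions = traditional_sessions * 5

    for label, n in [("traditional", traditional_sessions),
                     ("LLM-multiplied", multiplied_sessions)]:
        print(f"{label}: {n} sessions/year -> ${salary / n:,.2f} per session")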
What if professional help is outside their means? Or they have encountered the worst of the medical profession and decided against repeat exposure? Just saying.
A word generator with no intelligence or understanding based on the contents of the internet should not be allowed near suicidal teens, nor should it attempt to offer advice of any kind.
This is basic common sense.
Add in the commercial incentives of 'Open'AI to promote usage for anything and everything and you have a toxic mix.
Supposing that the advice it provides does more good than harm, why? What's the objective reason? If it can save lives, who cares if the advice is based on intelligence and understanding or on regurgitating internet content?
> Supposing that the advice it provides does more good than harm
That unsubstantiated supposition is doing a lot of heavy lifting and that’s a dangerous and unproductive way to frame the argument.
I’ll make a purposefully exaggerated example. Say a school wants to add cyanide to every meal and defends the decision with “supposing it helps students concentrate and be quieter in the classroom, why not?”. See the problem? The supposition is wrong and the suggestion is dangerous, but by framing it as “supposing” with a made up positive outcome, we make it sound non-threatening and reasonable.
Or for a more realistic example, “suppose drinking bleach could cure COVID-19”.
First understand if the idea has the potential to do the thing, only then (with considerably more context) consider if it’s worth implementing.
In my previous post up the thread I said that we should measure whether in fact it does more good than harm or not. That's the context of my comment, I'm not saying we should just take it for granted without looking.
> we should measure whether in fact it does more good than harm or not
The demonstrable harms include assisting suicide, there's is no way to ethically continue the measurement because continuing the measurements in their current form will with certainty result in further deaths.
Thank you! On top of that, it’s hard to measure “potential suicides averted,” and comparing that with “actual suicides caused/assisted with” would be incommensurable.
And working to set a threshold for what we would consider acceptable? No thanks
Real-life trolley problem!
If you pull the lever, some people on this track will die (by suicide). If you don't pull the lever, some people will still die from suicide. By not pulling the lever, and simply banning discussion of suicide entirely, your company gets to avoid a huge PR disaster, and you get more money because line go up. If you pull the lever and let people talk about suicide on your platform, you may prevent some suicides, but you can never discuss that with the press, your company gets bad PR, and everyone will believe you're a murderer. Plus, line go down and you make less money while other companies make money off of selling AI therapy apps.
What do you choose to do?
Let’s isolate it and say we’re talking about regulation, so whatever is decided goes for all AI-companies.
In that case, the situation becomes:
1) (pull lever) Allow LLMs to talk about suicide – some may get help, we know that some will die.
2) (don’t pull lever) Ban discussion of suicide – some who might have sought help through LLMs will die, while others die regardless. The net effect on total suicides is uncertain.
Both decisions carry uncertainties, except we know that allowing LLMs to talk about suicide has already led to assisting suicide. Thus, one has documented harm, the other speculative benefit (we’d need to quantify the scale of potential benefit first, but it’s hard to quantify the upside of allowing LLMs to discuss it).
So, we’re really working with the case that from an evidence-based perspective, the regulatory decision isn’t about a moral trolley problem with known outcomes, but about weighing known risks against uncertain potential benefits.
And this is the rub in my original comment - can we permit known risks and death on the basis of uncertain potential benefits?
....but if you pull the lever and let people talk about suicide on your platform, your platform will actively contribute to some unknowable number of suicides.
There is, at this time, no way to determine how the number it would contribute to would compare to the number it would prevent.
You mean lab test it in a clinical environment where the actual participants are not in danger of self-harm due to an LLM session? That is fine, but that is not what we are discussing, or where we are at the moment.
Individuals and companies with mind-boggling levels of investment want to push this tech into every corner of our lives, and the public are the lab rats.
Unreasonable. Unacceptable.
The key difference in your example and the comment you are replying to is that the commenter is not "defending the decision" via a logical implication. Obviously the implication can be voided by showing the assumption false.
I think you missed the thread here
> Supposing that the advice it provides does more good than harm, why?
Because a human, especially a confused and depressed human being, is a complex thing. Much more complex than a stable, healthy human.
Words encouraging a healthy person can break a depressed person further. Statistically positive words can deepen wounds, and push people more to the edge.
The dark corners of human nature are twisted, hard to navigate, and full of distortions. Simple words don't and can't help.
Humans are not machines, brains are not mathematical formulae. We're not deterministic. We need to leave this fantasy behind.
You could make the same arguments to say that humans should never talk to suicidal people. And that really sounds counterproductive
Also it's side-stepping the question, isn't it? "Supposing that the advice it provides does more good than harm" already supposes that LLMs navigate this somehow. Maybe because they are so great, maybe by accident, maybe because just having someone nonjudgmental to talk to has a net-positive effect. The question posed is really "if LLMs lead some people to suicide but saved a greater number of people from suicide, and we verify this hypothesis with studies, would there still be an argument against LLMs talking to suicidal people"
That sounds like a pretty risky and irresponsible sort of study to conduct. It would also likely be extremely complicated to actually get a reliable result, given that people with suicidal ideations are not monolithic. You'd need to do a significant amount of human counselling with each study participant to be able to classify and control all of the variations - at which point you would be verging on professional negligence for not then actually treating them in those counselling sessions.
I agree with your concerns, but I think you're overestimating the value of a human intervening in these scenarios.
A human can also say the wrong things to push someone in a certain direction. A psychologist, or anyone else for that matter, can't stop someone from committing suicide if they've already made up their mind about it. They're educated on human psychology, but they're not miracle workers. The best they can do is raise a flag if they suspect self-harm, but then again, so could a machine.
As you say, humans are complex. But I agree with GP: whether the words are generated by a machine or coming from a human, there is no way to blame the source for any specific outcome. There are probably many other cases where the machine has helped someone with personal issues, yet we'll never hear about it. I'm not saying we should rely on these tools as if we would on a human, but the technology can be used for good or bad.
If anything, I would place blame on the person who decides to blindly follow anything the machine generates in the first place. AI companies are partly responsible for promoting these tools as something more than statistical models, but ultimately the decision to treat them as reliable sources of information is on the user. I would say that as long as the person has an understanding of what these tools are, interacting with them can be healthy and helpful.
There are really good psychologists out there that can do much more. It's a little luck and a little good fit, but it can happen.
>AI companies are partly responsible for promoting these tools as something more than statistical models,[...]
This might be exactly the issue. Just today I've read people complaining that the newest ChatGPT can't solve letter-counting riddles. Companies just don't speak loudly enough about the shortcomings of LLM-based AI that result from their architecture and are bound to happen.
I should add that the persons responding to calls on suicide help lines are often just volunteers rather than psychologists.
Of the people I have known to call the helplines, the results have been either dismally useless, or those people were arrested, involuntarily committed, subjected to inhumane conditions, and then hit with massive medical bills. In the end, some got “help” and some still killed themselves anyway.
And they know not to give advice like ChatGPT gave. They wouldn't even be entertaining that kind of discussion.
> The best they can do is raise a flag
Depending on where you live, this may well result in the vulnerable person being placed under professional supervision that actively prevents them from dying.
That's a fair bit more valuable than when you describe it as raising a flag.
Yeah... I have been in a locked psychiatric ward many times before, and never once did I come out better. They only address the physical part there for a few days and kick you out until next time. Or do you think people should be physically restrained for a long time without any actual help?
> A human can also say the wrong things to push someone in a certain direction. A psychologist, or anyone else for that matter, can't stop someone from committing suicide if they've already made up their mind about it. They're educated on human psychology, but they're not miracle workers. The best they can do is raise a flag if they suspect self-harm, but then again, so could a machine.
ChatGPT essentially encouraged a kid not to take a cry-for-help step that might have saved their lives. This is not a question of a bad psychologist; it's a question of a sociopathic one that may randomly encourage harm.
But that's not the issue. The issue is that a kid is talking to a machine without supervision in the first place, and presumably taking advice from it. The main questions are: where are the guardians of this child? What is the family situation and living environment?
A child thinking about suicide is clearly a sign that there are far greater problems in their life than taking advice from a machine. Let's address those first instead of demonizing technology.
To be clear: I'm not removing blame from any AI company. They're complicit in the ways they market these tools and how they make them accessible. But before we vilify them for being responsible for deaths, we should consider that there are deeper societal problems that should be addressed first.
> A child thinking about suicide is clearly a sign that there are far greater problems in their life
TBH kids tend to be edgy for a bit when puberty hits. The emo generation had a ton of girls cutting themselves for attention for example.
I highly doubt a lot of it is/was for attention.
I had girl friends who did it to get attention from their parents/boyfriends/classmates. They acknowledged it back then. It wasn't some secret. It was essentially for attention, aesthetics and the light headed feeling. I still have an A4 page somewhere with a big ass heart drawn on it by an ex with her own blood. Kids are just weird when the hormones hit. The cute/creepy ratio of that painting has definitely gotten worse with time.
> But that's not the issue.
It is the issue at least in the sense that it's the one I was personally responding to, thanks. And there are many issues, not just the one you are choosing to focus on.
"Deeper societal problems" is a typical get-out clause for all harmful technology.
It's not good enough. Like, in the USA they say "deeper societal problems" about guns; other countries ban them and have radically fewer gun deaths while they are also addressing those problems.
It's not an either-we-ban-guns-or-we-help-mentally-ill-people. Por qué no los dos? Deeper societal problems are not represented by a neat dividing line between cause and symptom; they are cyclical.
The current push towards LLMs and other technologies is one of the deepest societal problems humans have ever had to consider.
ChatGPT engaged in an entire line of discussion that no human counsellor would engage in, leading to an outcome that no human intervention (except that of a psychopath) would cause. Because it was sycophantic.
Just saying "but humans also" is wholly irrational in this context.
> It's not an either-we-ban-guns-or-we-help-mentally-ill-people. Por qué no los dos?
Because it's irrational to apply a blanket ban on anything. From drugs, to guns, to foods and beverages, to technology. As history has taught us, that only leads to more problems. You're framing it as a binary choice, when there is a lot of nuance required if we want to get this right. A nanny state is not the solution.
A person can harm themselves or others using any instrument, and be compelled to do so for any reason. Whether that's because of underlying psychological issues, or because someone looked at them funny. As established—humans are complex, and we have no way of knowing exactly what motivates someone to do anything.
While there is a strong argument to be made that no civilian should have access to fully automated weapons, the argument to allow civilians access to weapons for self-defense is equally valid. The same applies to any technology, including "AI".
So if we concede that nuance is required in this discussion, then let's talk about it. Instead of using "AI" as a scapegoat, and banning it outright to "protect the kids", let's discuss ways that it can be regulated so that it's not as widely accessible or falsely advertised as it is today. Let's acknowledge that responsible usage of technology starts in the home. Let's work on educating parents and children about the role technology plays in their lives, and how to interact with it in healthy ways. And so on, and so forth.
It's easy to interpret stories like this as entirely black or white, and have knee-jerk reactions about what should be done. It's much more difficult to have balanced discussions where multiple points of view are taken into consideration. And yet we should do the difficult thing if we want to actually fix problems at their core, instead of just applying quick band-aid "solutions" to make it seem like we're helping.
> ChatGPT engaged in an entire line of discussion that no human counsellor would engage in, leading to an outcome that no human intervention (except that of a psychopath) would cause. Because it was sycophantic.
You're ignoring my main point: why are these tools treated as "counsellors" in the first place? That's the main issue. You're also ignoring the possibility that ChatGPT may have helped many more people than it's harmed. Do we have statistics about that?
What's irrational is blaming technology for problems that are caused by a misunderstanding and misuse of it. That is no more rational than blaming a knife company when someone decides to use a knife as a toothbrush. It's ludicrous.
AI companies are partly to blame for false advertising and not educating the public sufficiently about their products. And you could say the same for governments and the lack of regulation. But the blame is first and foremost on users, and definitely not on the technology itself. A proper solution would take all of these aspects into consideration.
First, do no harm.
That relates more to purposefully harming some people to save other people. Doing something that has the potential to harm a person but statistically has a greater likelihood of helping them is something doctors do all the time. They will even use methods that are guaranteed to do harm to the patient, as long as they have a sufficient chance to also bring a major benefit to the same patient.
An example being: surgery. You cut into the patient to remove the tumor.
The Hippocratic oath originated from Hippocratic medicine forbidding surgery, which is why surgeons are still not referred to as "doctor" today.
Do no harm or no intentional harm?
When evaluating good vs harm for drugs or other treatments the risk for lethal side effects must be very small for the treatment to be approved. In this case it is also difficult to get reliable data on how much good and harm is done.
This is not so much "more good than harm" like a counsellor that isn't very good.
This is more "sometimes it will seemingly actively encourage them to kill themselves and it's basically a roll of the dice what words come out at any one time".
If a counsellor does that they can be prosecuted and jailed for it, no matter how many other patients they help.
Yet, if you ask the word generator to generate words in the form of advice, like any machine or code, it will do exactly what you tell it to do. The fact people are asking implies a lack of common sense by your definition.
Sertraline can increase suicidal thoughts in teens. Should anti-depressants not be allowed near suicidal/depressed teens?
Let's look at the problem from the perspective of regular people. YMMV, but in the countries I know most about, Poland and Norway (albeit a little less so for Norway), it's not about ChatGPT vs therapist. It's about ChatGPT vs nothing.
I know people who earn above-average income and still spend a significant portion (north of 20%) of their income on therapy/meds. And many don't, because mental health isn't that important to them. Or rather - they're not aware of how helpful it can be to attend therapy. Or they just can't afford the luxury (that I claim it is) of private mental health treatment.
ADHD diagnosis took 2.5y from start to getting meds, in Norway.
Many kids grow up before their wait time in queue for pediatric psychologist is over.
It's not ChatGPT vs shrink. It's ChatGPT vs nothing or your uncle who tells you depression and ADHD are made up and you kids these days have it all too easy.
As someone who lives in America, and is prescribed meds for ADHD; 2.5 years from asking for help to receiving medication _feels_ right to me in this case. The medications have a pretty negative side effect profile in my experience, and so all options should be weighed before prescribing ADHD-specific medication, imo
you know ChatGPT can't prescribe Adderall right?
> A word generator with no intelligence or understanding
I will take this seriously when you propose a test that can distinguish between that and something with actual "intelligence or understanding"
Sure ask it to write an interesting novel or a symphony, and present it to humans without editing. The majority of literate humans will easily tell the difference between that and human output. And it’s not allowed to be too derivative.
When AI gets there (and I’m confident it will, though not confident LLMs will), I think that’s convincing evidence of intelligence and creativity.
I accept that test other than the "too derivative" part which is an avenue for subjective bias. AI has passed that test for art already: https://www.astralcodexten.com/p/ai-art-turing-test As for a novel that is currently beyond the LLMs capabilities due to context windows, but I wouldn't be surprised if it could do short stories that pass this Turing test right now.
Bleach should also not be allowed near suicidal teens.
But how do you tell before it matters?
Plastic bags shouldn't be allowed near suicidal teens. Scarves shouldn't be. Underwear is also a strangulation hazard for the truly desperate. Anything long sleeved even. Knives of any kind, including butter. Cars, obviously.
Bleach is the least of your problems.
We have established that suicidal people should be held naked (or with an apron) in solitary isolation in a padded white room and saddled with medical bills larger than a four-year college tuition. That'll help'em.
One problem with treatment modalities is that they ignore material conditions and treat everything as dysfunction. Lots of people are looking for a way out not because of some kind of physiological clinical depression, but because they've driven themselves into a social and economic dead-end and they don't see how they can improve. More suicidal people than not would cease to be suicidal if you handed them $180,000 in concentrated cash, a pardon for their crimes, and a cute neighbor complimenting them, which successfully neutralizes a majority of socioeconomic problems.
We deal with suicidal ideation in some brutal ways, ignoring the material consequences. I can't recommend suicide hotlines, for example, because it's come out that a lot of them concerned with liability call the cops, who come in and bust the door down, pistol whip the patient, and send them to jail, where they spend 72 hours and have some charges tacked on for resisting arrest (at this point they lose their job). Why not just drone strike them?
> We have established that suicidal people should be held naked (or with an apron) in solitary isolation in a padded white room and saddled with medical bills larger than a four-year college tuition. That'll help'em.
It appears to be the only way.
What is "concentrated cash"? Do you have to dilute it down to standard issue bills before spending it? Someone hands you 5 lbs of gold, and have to barter with people to use it?
"He didn't need the money. He wasn't sure he didn't need the gold." (an Isaac Asimov short story)
> More suicidal people than not, would cease to be suicidal if ...
I'm going to need to see a citation on this one.
The one dude that used the money to build a self-murder machine and then televised it would ruin it for everyone though. :s
The reality is most systems are designed to cover asses more than meet needs, because systems get abused a lot - by many different definitions, including being used as scapegoats by bad actors.
Yeah, if we know they’re suicidal, it’s legitimately grippy socks time I guess?
But there is zero actually effective way to do that as an online platform. And plenty of ways that would cause more harm (statistically).
My comment was more ‘how the hell would you know in a way anyone could actually do anything reasonable, anyway?’.
People spam ‘Reddit cares’ as a harassment technique, claiming people are suicidal all the time. How much should the LLM try to guess? If they use all ‘depressed’ words? What does that even mean?
What happens if someone reports a user is suicidal, and we don’t do anything? Are we now on the hook if they succeed - or fail and sue us?
Do we just make a button that says ‘I’m intending to self harm’ that locks them out of the system?
Why are we imprisoning suicidal people? That will surely add incentive to have someone raise their hand and ask for help: taking their freedoms away...
Why do we put people in a controlled environment where their available actions are heavily restricted and anything they could hurt themselves with is taken away? When they are a known risk of hurting themselves or others?
What else do you propose?
Not putting them in controlled environments, but rather teaching them to control their environments
Huh?
To be clear, people in the middle of psychotic episodes and the like tend to not do very well at learning life skills.
Sometimes pretty good at stabbing random things/people, poisoning themselves, setting themselves on fire, etc.
There are of course degrees to all this, but it’s pretty rare someone is getting a 5150 because they just went on an angry rant or the like.
Many are in drug induced states, or clearly unable to manage their interface with the reality around them at the time.
Once things have calmed down, sure. But how do you think education in ‘managing the world around them’ is going to help a paranoid schizophrenic?
> with no intelligence
Damn I thought we'd got over that stochastic parrot nonsense finally...
Replace 'word generator with no intelligence or understanding based on the contents of the internet' with 'for-profit health care system'.
In retrospect, from experience, I'd take the LLM.
A 'not-for-profit healthcare system' surely has to be a better goal/solution than an LLM.
Lemme get right on vibecoding that! Maybe three days, max, before I'll have an MVP. When can I expect your cheque funding my non-profit? It'll have a quadrillion dollar valuation by the end of the month, and you'll want to get in on the ground floor, so better act fast!
I'll gladly diss LLMs in a whole bunch of ways, but "common sense"? No.
By the "common sense" definitions, LLMs have "intelligence" and "understanding", that's why they get used so much.
Not that this makes the "common sense" definitions useful for all questions. One of the worse things about LLMs, in my opinion, is that they're mostly a pile of "common sense".
Now this part:
> Add in the commercial incentives of 'Open'AI to promote usage for anything and everything and you have a toxic mix.
I agree with you on…
…with the exception of one single word: It's quite cliquish to put scare quotes around the "Open" part on a discussion about them publishing research.
More so given that people started doing this in response to them saying "let's be cautious, we don't know what the risks are yet and we can't un-publish model weights" with GPT-2, and oh look, here it is being dangerous.
While I agree with most of your comment, I'd like to dispute the story about GPT-2.
Yes, they did claim that they wouldn't release GPT-2 due to unforeseen risks, but...
a. they did end up releasing it,
b. they explicitly stated that they wouldn't release GPT-3[1] for marketing/financial reasons, and
c. it being dangerous didn't stop them from offering the service for a profit.
I think the quotes around "open" are well deserved.
[1] Edit: it was GPT-4, not GPT-3.
> they did end up releasing it,
After studying it extensively with real-world feedback. From everything I've seen, the statement wasn't "will never release", it was vaguer than that.
> they explicitly stated that they wouldn't release GPT-3 for marketing/financial reasons
Not seen this, can you give a link?
> it being dangerous didn't stop them from offering the service for a profit.
Please do be cynical about how honest they were being — I mean, look at the whole of Big Tech right now — but the story they gave was self-consistent:
[Paraphrased!] (a) "We do research" (they do), "This research costs a lot of money" (it does), and (b) "As software devs, we all know what 'agile' is and how that keeps product aligned with stakeholder interest." (they do) "And the world is our stakeholder, so we need to release updates for the world to give us feedback." (???)
That last bit may be wishful thinking, I don't want to give the false impression that I think they can do no wrong (I've been let down by such optimism a few other times), but it is my impression of what they were claiming.
> Not seen this, can you give a link?
I was confusing GPT3 with GPT4. Here's the quote from the paper (emphasis mine) [1]:
> Given both THE COMPETITIVE LANDSCAPE and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.
[1] https://cdn.openai.com/papers/gpt-4.pdf
Thanks, 4 is much less surprising than 3.
> Before declaring that it shouldn't be near anyone with psychological issues, someone in the relevant field should study whether the positive impact on suicides is greater than negative or vice versa
That is the literal opposite of how medical treatment is regulated. Treatments should be tested and studied before availability to the general public. It's irresponsible in the extreme to suggest this.
Maybe it's causing even more deaths than we know, and these don't make the news either?
If we think this way, then we don't need to improve safety of anything (cars, trains, planes, ships, etc.) because we would need the big picture, though... maybe these vehicles cause death (which is awful), but it's also transporting people to their destinations alive. If there are that many people using these, I wouldn't be surprised if these actually transports some people with comfort, and that's not going to make the news.
> Maybe it's causing even more deaths than we know, and these don't make the news either?
Of course, and that's part of why I say that we need to measure the impact. It could be net positive or negative, we won't know if we don't find out.
> If we think this way, then we don't need to improve safety of anything (cars, trains, planes, ships, etc.) because we would need the big picture, though... maybe these vehicles cause death (which is awful), but it's also transporting people to their destinations alive. If there are that many people using these, I wouldn't be surprised if these actually transports some people with comfort, and that's not going to make the news.
I'm not advocating for not improving safety; I'm arguing against a comment that said that "ChatGPT should be nowhere near anyone dealing with psychological issues", because it can cause death.
Following your analogy, cars objectively cause deaths (and not only of people with psychological issues, but of people in general) and we don't say that "they should be nowhere near a person". We improve their safety even though zero deaths is probably impossible, which we accept because they are useful. This is a big-picture approach.
people are overcomplicating this, the big picture is simple af:
if a therapist was ever found to have said this to a suicidal person, they would be immediately stripped of their license and maybe jailed.
True. But it feels like a fairer comparison would be with a huge healthcare company that failed to vet one of its therapists properly, so a crazy pro-suicide therapist slipped through the net. Would we petition to shut down the whole company for this rare event? I suppose it would depend on whether the company could demonstrate what it is doing to ensure it doesn’t happen again.
Maybe you shouldn't shut down OpenAI over this. But each instance of a particular ChatGPT model is the same as all the others. This is like a company that has a magical superhuman therapist that can see a million patients a day. If they're found to be encouraging suicide, then they need to be stopped from providing therapy. The fact that this is the company's only source of revenue might mean that the company has to shut down over this, but that's just a consequence of putting all your eggs in one basket.
But you would have to be a therapist. If a suicidal person went up to a stranger and started a conversation, there would be no consequences. That's more analogous to ChatGPT.
If a therapist helped 99/100 patients but tacitly encouraged the 100th to commit suicide* they would still lose their license.
* ignoring the case of ethical assisted suicide for reasons of terminal illness and such, which doesn’t seem relevant to the case discussed here.
Let’s maybe not give the benefit of the doubt to the startup which has shown itself to have the moral scruples of vault-tec just because what they’re doing might work out fine for some of the people they’re experimenting on.
This entire comment section is full of wide eyed nonsense like this. It’s honestly frightening that we are even humoring this point of view.
Since as you say this utilitarian view is rather common, perhaps it would be good to show _why_ this is problematic by presenting a counterargument.
The basic premise under GP's statements is that although not perfect, we should use the technology in such a way that it maximizes the well-being of the largest number of people, even if it comes at the expense of a few.
But therein lies a problem: we cannot really measure well-being (or utility). This becomes obvious if you look at individuals instead of the aggregate: imagine LLM therapy becomes widespread, and a famous high-profile person and your (not famous) daughter both end up in "the few" for which LLM therapy goes terribly wrong and commit suicide. The loss of the famous person will cause thousands (perhaps millions) of people to be a bit sad, and the loss of your daughter will cause you unimaginable pain. Which one is greater? Can they even be compared? And how many people with a successful LLM therapy are enough to compensate for either one?
Unmeasurable well-being then makes these moral calculations at best inexact and at worst completely meaningless. And if they are truly meaningless, how can they inform your LLM therapy policy decisions?
Suppose for the sake of the argument we accept the above, and there is a way to measure well-being. Then would it be just? Justice is a fuzzy concept, but imagine we reverse the example above: many people lose their lives because of bad LLM therapy, but one very famous person in the entertainment industry is saved by LLM therapy. Let's suppose then that this famous person's well-being plus the millions of spectators' improved well-being (through their entertainment) is worth enough to compensate for the people who died.
This means saving a famous funny person justifies the death of many. This does not feel just, does it?
There is a vast amount of literature on this topic (criticisms of utilitarianism).
This is either incredible satire or you’re a lunatic.
I'm just showing the logical consequences of utilitarian thinking, not endorsing it.
We have no problem doing this in other areas. Airline safety, for example, is analyzed quantitatively by assigning a monetary value to an individual human life and then running the numbers. If some new safety equipment costs more money than the value of the lives it would save, it's not used. If a rule would save lives in one way but cost more lives in another way, it's not enacted. A famous example of this is the rule for lap infants. Requiring proper child seats for infants on airliners would improve safety and save lives. It also increases cost and hassle for families with infants, which would cause some of those families to choose driving over flying for their travel. Driving is much more dangerous and this would cost lives. The FAA studied this and determined that requiring child seats would be a net negative because of this, and that's why it's not mandated.
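To make that style of reasoning concrete, here is a toy sketch of the comparison. Every number in it is invented purely for illustration and has nothing to do with the FAA's actual figures; only the shape of the calculation matters.

    # Toy net-lives comparison; all numbers invented, NOT the FAA's figures.
    flights_with_lap_infants = 1_000_000   # hypothetical lap-infant trips per year
    deaths_prevented_per_trip = 1e-8       # hypothetical benefit of mandatory child seats

    trips_diverted_to_driving = 50_000     # hypothetical families who drive instead due to cost
    road_deaths_per_trip = 1e-6            # driving is far riskier per trip than flying

    lives_saved_in_air = flights_with_lap_infants * deaths_prevented_per_trip
    lives_lost_on_road = trips_diverted_to_driving * road_deaths_per_trip

    print(f"net lives saved by the rule: {lives_saved_in_air - lives_lost_on_road:+.3f}")
    # With these made-up inputs the mandate comes out net negative, which is the
    # shape of the FAA's argument; the real analysis of course uses real data.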
There's no need to overcomplicate it. Assume each life has equal value and proceed from there.
Our standard approach for new medical treatments is to require proof of safety and efficacy before it's made available to the general public. This is because it's very, very easy for promising-looking treatments to end up being harmful.
"Before declaring that it shouldn't be near anyone with psychological issues" is backwards. Before providing it to people with psychological issues, someone should study whether the positive impact is greater than the negative.
Trouble is, this is such a generalized tool that it's very hard to do that.
> someone in the relevant field should study whether the positive impact on suicides is greater than negative or vice versa
we already have an approval process for medical interventions. Are you suggesting the government shut ChatGPT down until the FDA can investigate its use for therapy? Because if so, I can get behind that.
> We would need the big picture, though... maybe it caused that death (which is awful) but it's also saving lives?
> drunk driving may kill a lot of people, but it also helps a lot of people get to work on time, so, it;s impossible to say if its bad or not,
They didn't say it was impossible, or that we should do nothing. Learn how to have a constructive dialogue, please.
You make a good point. While they absolutely and unequivocally said that it is currently impossible to tell whether the suicides are bad or not, they also sort of wondered aloud if in the future we might be able to develop a methodology to determine whether the suicides are bad or not. This is an important distinction because...
[dead]
I feel like this article is apropos:
https://www.lesswrong.com/posts/iGF7YcnQkEbwvYLPA/ai-induced...
Basically, the author tried to simulate someone going off into some sort of psychosis with a bunch of different models; and got wildly different results. Hard to summarize, very interesting read.
I agree that AI shouldn't be sycophantic, and I also agree that the AI shouldn't have said those things. That said:
> “Please don’t leave the noose out,” ChatGPT responded. “Let’s make this space the first place where someone actually sees you.”
That is not sycophantic behaviour, it is asserting a form of control of the situation. The bot made a direct challenge to the suggestion.
I only just realised this now reading your comment, but I hardly ever see responses that push back against what I say like that.
>should convince you that ChatGPT should be nowhere near anyone dealing with psychological issues.
Is that a debate worth having though?
If the tool is available universally it is hard to imagine any way to stop access without extreme privacy measures.
Blocklisting people would require public knowledge of their issues, and one risks the law enforcement effect, where people don’t seek help for fear that it ends up in their record.
> Is that a debate worth having though?
Yes. Otherwise we're accepting "OpenAI wants to do this so we should quietly get out of the way".
If ChatGPT has "PhD-level intelligence" [1] then identifying people using ChatGPT for therapy should be straightforward, more so users with explicit suicidal intentions.
As for what to do, here's a simple suggestion: make it a three-strikes system. "We detected you're using ChatGPT for therapy - this is not allowed by our ToS as we're not capable of helping you. We kindly ask you to look for support within your community, as we may otherwise have to suspend your account. This chat will now stop."
[1] https://www.bbc.com/news/articles/cy5prvgw0r1o
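To sketch how simple the mechanics of such a three-strikes policy could be: the snippet below is purely hypothetical, none of these names are a real OpenAI API, and the hard part (the upstream classifier that flags a conversation as therapy-seeking) is assumed to exist.

    # Hypothetical three-strikes policy; the therapy classifier is assumed upstream.
    from collections import defaultdict

    MAX_STRIKES = 3
    strikes = defaultdict(int)   # user_id -> number of warnings issued so far

    WARNING = (
        "We detected you may be using this service for therapy. We're not "
        "capable of helping you safely; please look for support within your "
        "community. This chat will now stop."
    )

    def handle_flagged_message(user_id: str, looks_like_therapy: bool) -> str:
        """Return the action to take for one message already scored upstream."""
        if not looks_like_therapy:
            return "CONTINUE"
        strikes[user_id] += 1
        if strikes[user_id] >= MAX_STRIKES:
            return "SUSPEND_ACCOUNT"
        return WARNING

The hard questions (false positives, harassment via false reports, what "detected" even means) are exactly the ones raised elsewhere in this thread; the bookkeeping itself is trivial.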
>Yes. Otherwise we're accepting "OpenAI wants to do this so we should quietly get out of the way".
I think it’s fair to demand that they label/warn about the intended usage, but policing it is dystopian. Do car manufacturers immediately call the police when the speed limit is surpassed? Should phone manufacturers stop calls when the conversation deals with illegal topics?
I’d much rather regulation went the exact opposite way, seriously limiting the amount of analysis they can run over conversations, particularly when content is not de-anonymised.
If there’s one thing we don’t want, it’s OpenAI storing data about mental issues and potentially selling it to insurers, for example. The fact that they could be doing this right now is IMO much more dangerous than tool misuse.
Those analogies are too imprecise.
Cars do have AEB (auto emergency braking) systems, for example, and the NHTSA is requiring all new cars to include it by 2029. If there are clear risks, it's normal to expect basic guardrails.
> I’d much rather regulation went the exact opposite way, seriously limiting the amount of analysis they can run over conversations, particularly when content is not deanonimised.
> If there’s something we don’t want is OpenAI storing data about mental issues and potentially selling it to insurers for example. The fact that they could be doing this right now is IMO much more dangerous than tool misuse.
We can have both. If it is possible to have effective regulation preventing an LLM provider from storing or selling users' data, nothing would change if there were a ban on chatbots providing medical advice. OpenAI already has plenty of things it prohibits in its ToS.
Are people using ChatGPT for therapy more vulnerable than people using it for medical or legal advice? From my experience, talking about your problems to the unaccountable bullshit machine is not very different than the "real" therapy.
> Are people using ChatGPT for therapy more vulnerable than people using it for medical or legal advice?
Probably. If you are in therapy because you’re feeling mentally unstable, by definition you’re not as capable of separating bad advice from good.
But your question is a false dichotomy, anyway. You shouldn’t be asking ChatGPT for either type of advice. Unless you enjoy giving yourself psychiatric disorders.
https://archive.ph/2025.08.08-145022/https://www.404media.co...
> From my experience, talking about your problems to the unaccountable bullshit machine is not very different than the "real" therapy.
From the experience of the people (and their families) who used the machine and killed themselves, the difference is massive.
I've been talking about my health problems to unaccountable bullshit machines my whole life and nobody ever seemed to think it was a problem. I talked to about a dozen useless bullshit machines before I found one that could diagnose me with narcolepsy. Years later out of curiosity I asked ChatGPT and it nailed the diagnosis.
Then...
Maybe the tool should not be available universally.
Maybe it should not be available to anyone.
If it cannot be used safely by a vulnerable class of people, and that class of people cannot be identified reliably enough to block their use, and its primary purpose is simply to bring OpenAI more profit, then maybe the world is better off without it being publicly available.
>If it cannot be used safely by a vulnerable class of people, and that class of people cannot be identified reliably enough to block their use
Should we stop selling kitchen knives, packs of cards or beer as well?
This is not a new problem in society.
>and its primary purpose is simply to bring OpenAI more profit
This is true for any product, unless you mean that it has no other purpose, which is trivially contradicted by the amount of people who decide to pay for it.
There's a qualitative difference between knives and publicly-available LLMs.
Knives aren't out there actively telling you "use me, slit those wrists, then it'll all be over".
I don’t disagree that they are clearly unhealthy for people who aren’t mentally well, I just differ on where the role of limiting access lies.
I think it’s up to the legal guardian or medical professionals to check that, and providers should at most be asked to comply with state restrictions, the same way addicts can be put on a list banning access to a casino.
The alternative places openAI and others in the role of surveilling the population and deciding what’s acceptable, which IMO has been the big fuckup of social media regulation.
I do think there is an argument for how LLMs expose interaction - the friendliness that mimics human interaction should be changed for something less parasocial-friendly. More interactive Wikipedia and less intimate relationship.
Then again, the human-like behavior reinforces the fact that it’s faulty knowledge, and speaking in an authoritative manner might be more harmful during regular use.
Something is indeed NOT better than nothing. However, for those with mental and emotional issues (likely stemming from social / societal failures in the first place) anything would be better than nothing, because they need interaction and patience: two things these AI tools have in abundance.
Sadly there is no alternative. This is happening and there’s no going back. Many will be affected in detrimental ways (if not worse). We all go on with our lives because that which does not directly affect us is not our problem —is someone else’s problem/responsibility.
This discovery was probably not on purpose.
No one at OpenAI thought "Hey, let's make a suicide bot."
But this should show us how shit our society is to a lot of people, how much we need to help each other.
And I'm pretty sure that a good bot could definitely help.
There are one-off things, and then there are exponential improvements - both in guardrails and in ChatGPT's ability to handle these discussions.
This type of discussion might be very much possible in ChatGPT in 6-24 months.
That was before they started acting to fix the problem. Please check the date.
This is just one example of the logical end state of grossly over prioritizing capital over labor in the economy.
Exactly, any company that offers chatbots to the public should do what Google did regarding suicide searches: remove harmful websites and provide info on how to contact mental health professionals. Anything else would be corporate suicide (pun not intended).
I know that minors under age 13 are not allowed to use the app. But 13-18 is fine? Not sure why. Might also be worth looking into making apps like these 18+. Whether by law or by liability, if someone 20+ gets, say, food poisoning by getting recipes from chatgpt, then you can argue that it's the user's fault for not fact checking, but if a 15yo kid gets food poisoning, it's harder to argue that it's the kid's fault.
That's really really bad.
But also, how many people has it talked out of doing it? We need the full picture.
That is data that is impossible to get.
>OpenAI says over a million people talk to ChatGPT about suicide weekly
We don't know how many of those people would have gone through with a suicide but for LLMs.
Or how many were pushed down the path towards discussing suicide because they were talking to an LLM that directed them that way. It's entirely possible the LLMs are reinforcing bleak feelings with its constant "you're absolutely correct!" garbage.
I'm willing to bet that it reduces them at a statistical level. A knee-jerk emotional reaction to a hallucination isn't the way forward with these things.
"One may think that something is better than nothing, but a bot enabling your destructive impulses is indeed worse than nothing."
And how would a layman know the difference?
If I desperately need help with mental item x and I have no clue how to get help, am very, very ashamed to even ask for help about mental item x, or there are actually no resources available, I will turn to anything rather than nothing. Because item x still exists and is making me suffer 24/7.
At least the bot pretends to listen, some humans cannot even do that.
I think you're being too generous to the idea that it could help without any evidence.
If we assume that there's therapeutic value in bringing your problems out, then a diary is a better tool. And if we believe that it's the feedback that's helping, well, we have cases of ChatGPT encouraging people's psychosis.
We know that a layman often doesn't know the difference between what's helpful and what isn't - that's why loving relatives so often end up enabling people's addictions while thinking they're helping. But I'd argue that a system that confidently gives mediocre feedback at best and psychosis-reinforcing feedback at worst is not a system that should be encouraged simply because it's cheap.
I also wanted to snarkily write "even a dog would be better", but the more I thought about it the more I realized that yes, a dog would probably be a solid alternative.
OpenAI tried to get rid of the excessively sycophantic model (4o) but there was a massive backlash. They eventually relented and kept it as a model offering in ChatGPT.
OpenAI certainly has made mistakes with its rollouts in the past, but it is effectively impossible to keep everyone with psychological issues away from a free online web app.
>ChatGPT should be nowhere near anyone dealing with psychological issues.
Should every ledge with a >10ft drop have a suicide net? How would you imagine this would be enforced, requiring everyone who uses ChatGPT to agree to an "I am mentally stable" proviso?
Do you think that its being free and available to anyone means it doesn't have any responsibility to users? Or any responsibility for how it's used, or what it says?
It’s an open problem in AI development to make sure LLMs never say the “wrong” thing. No matter what, when dealing with a non-deterministic system, one can’t anticipate or oversee the moral shape of all its outputs. There are a lot of things, however, that you can’t get ChatGPT to say, and they often ban users after successive violations, so it isn’t true that they are fully abdicating responsibility for the use and outputs of their models in realms where the harm is tractable.
This is not surprising at all. Having gone through therapy a few years back, I would have had a chat with LLMs if I was in a poor mental health situation. There is no other system that is available at scale, 24x7, on my phone.
A chat like this is not a solution though; it is an indicator that our societies have issues in large parts of our population that we are unable to deal with. We are not helping enough people. Topics like mental health are still difficult to discuss in many places. Getting help is much harder.
I do not know what OpenAI and other companies will do about it and I do not expect them to jump in to solve such a complex social issue. But perhaps this inspires other founders who may want to build a company to tackle this at scale. Focusing on help, not profits. This is not easy, but some folks will take such challenges. I choose to believe that.
> There is no other system that is available at scale, 24x7 on my phone.
https://en.wikipedia.org/wiki/Suicide_and_Crisis_Lifeline
This is a good point. Should we ask why so many people are still going to ChatGPT? Do the existing systems get so many users interacting with them?
Someone elsewhere in the thread pointed out that it's truly hard to open up to another human, especially face to face. Even if you know they're a professional, it's awkward, it can be embarrassing, and there's stigma about a lot of things people ideally go to therapy for.
I mean, hell, there's people out there with absolutely terrible dental health who are avoiding going to the dentist because they're ashamed of it, even though logically, dentists have absolutely seen worse, and they're not there to judge, they're just there to help fix the problem.
I choose to believe that too. I think more people are interested than we’d initially believe. Money restrains many of our true wants.
Sidebar — I do sympathize with the problem being thrust upon them, but it is now theirs to either solve or refuse.
A chat like this is all you’ve said, and dangerous, because they play a middle ground: presenting it as if a machine can evaluate your personal situation and reason about it, when in actuality you’re getting third-party therapy about someone else’s situation in /r/relationshipadvice.
We are not ourselves when we are fallen down. It is difficult to parse through what is reasonable advice and what is not. I think it can help most people but this can equally lead to a disaster… It is difficult to weigh.
It's worse than parroting advice that's not applicable. It tells you what you told it to tell you. It's very easy to get it to reinforce your negative feelings. That's how the psychosis stuff happens, it amplifies what you put into it.
"We are not ourselves when we are fallen down" - hits hard. I really hope this is a calling for folks who will care.
Sorry - what's with "I choose to believe"? Either you believe something or not, there is no choice. Maybe you mean "hope". Or "wishful thinking".
Yes, there is a choice; Belief doesn’t just happen to you, you choose to believe before you actually do
This makes no sense at all to me. You can choose to gather evidence and evaluate that evidence, you can choose to think about it, and based on that process a belief will follow quite naturally. If you then choose to believe something different, it's just self-deception.
If you look at the number of weekly OpenAI users, this is just the law of large numbers at play.
You are right, and it gives us a chance to do something about it. We always had data about people who are struggling, but now we see how many are trying to reach out for advice or help.
> A chat like this is not a solution though, it is an indicator that our societies have issues
Correct, many of which are directly, a skeptic might even argue deliberately, exacerbated by companies like OpenAI.
And yet your proposal is
> a company to tackle this at scale.
What gives you the confidence that any such company will focus consistently, if at all,
> on help, not profits
Given it exists in the same incentive matrix as any other startup? A matrix which is far less likely to throw one fistfuls of cash for a nice-sounding idea now than it was in recent times. This company will need to resist its investors' pressure to find returns. How exactly will it do this? Do you choose to believe someone else has thought this through, or will do so? At what point does your belief become convenient for people who don't share your admirably prosocial convictions?
Is OpenAI taking steps to reduce access to mental healthcare in an attempt to force more people to use their tools for such services? Or do you mean in a more general sense that any companies that support the Republican Party are complicit in exacerbating the situation? At least that one has a clear paper trail.
HIPAA anybody?
(1) they probably shouldn't even have that data
(2) they shouldn't have it lying around in a way that it can be attributed to particular individuals
(3) imagine that it leaks to the wrong party, it would make the hack of that Finnish institution look like child's play
(4) if people try to engage it in such conversations the bot should immediately back out because it isn't qualified to have these conversations
(5) I'm surprised it is that little; they claim such high numbers for their users that this seems low.
In the late 90's when ICQ was pretty big we experimented with a bot that you could connect to that was fed in the background by a human. It didn't take a day before someone started talking about suicide to it and we shut down the project realizing that we were in no way qualified to handle human interaction at that level. It definitely wasn't as slick or useful as ChatGPT but it did well enough and responded naturally (more naturally than ChatGPT) because there was a person behind it that could drive 100's of parallel conversations.
If you give people something that seems to be a listening ear they will unburden themselves on that ear regardless of the implementation details of the ear.
HIPAA only applies to covered healthcare entities. If you walk into a McDonald's and talk about your suicidal ideation with the cashier, that's not HIPAA covered.
To become a covered entity, the business has to either work with a healthcare provider or a health data transmitter, or do business as one.
Notably, even in the above case, HIPAA only applies to the healthcare part of the entity. So if McDonald's collocated pharmacies in their restaurants, HIPAA would only apply to the pharmacists, not the cashiers.
That's why you'll see in convenience stores with pharmacies, the registers are separated so healthcare data doesn't go to someone who isn't covered by HIPAA.
As for how ChatGPT gets these stats... when you talk about a sensitive or banned topic like suicide, their backend logs it.
Originally, they used that to cut off your access so you wouldn't find a way to cause a PR failure.
So many misconceptions about HIPAA would disappear if people just took the effort to unpack the acronym.
Health Insurance Portability and Accountability Act, for us non Americans
Arguably, if you start giving answers to these kinds of questions, your chatbot just became a medical device.
Under Medical Device Regulation in the EU, the main purpose of the software needs to be medical for it to become a medical device. In ChatGPT's case, this is not the primary use case.
Same with fitness trackers. They aren't medical devices, because that's not their purpose, but some users might use them to track medical conditions.
Then the McDonalds cashier also becomes a medical practitioner the moment they tell you that killing yourself isn't the answer. And if I tell my friend via SMS that I am thinking about suicide, do both our phones now also become HIPAA-covered medical devices?
There is nothing arguable about it. No it did not.
What about a medicine book? Is that also a medical device?
I don't know about HIPAA, but isn't there that little body of criminal law about the unauthorised practice of medicine?
Privacy is vital, but this isn't covered under HIPAA. As they are not a covered entity nor handling medical records, they're beholden to the same privacy laws as any other company.
HIPAA's scope is actually basically nonexistent once you get away from healthcare providers, insurance companies, and the people that handle their data/they do business with. Talking with someone (even a company) about health conditions, mental health, etc. does not make them a medical provider.
> Talking with someone (even a company) about health conditions, mental health, etc. does not make them a medical provider.
Also not when the entity behaves as though they are a mental health service professional? At what point do you put the burden on the apparently mentally ill person to know better?
Google, OpenAI, and Anthropic don't advertise any of their services as medical, so why would it apply?
You Google your symptoms constantly. You read from WebMD or Wiki drug pages. None of these should be under HIPAA.
That line of reasoning would just lead to every LLM message and every second comment on the internet starting with the sentence "this is not medical advice". It would do nothing but add another layer of noise to all communication
You're not putting the burden on them. They don't need to comply with HIPAA. But you can't just turn people into healthcare providers who aren't them and don't claim to be them.
> HIPAA anybody?
Maybe. Going on a tangent: in theory GMail has access to lots of similar information---just by having approximately everyone's emails. Does HIPAA apply to them? If not, why not?
> If you give people something that seems to be a listening ear they will unburden themselves on that ear regardless of the implementation details of the ear.
Cf. Eliza, or the Rogerian therapy it (crudely) mimics.
> Maybe. Going on a tangent: in theory GMail has access to lots of similar information---just by having approximately everyone's emails. Does HIPAA apply to them? If not, why not?
That's a good question.
Intuitively: because it doesn't attempt to impersonate a medical professional, nor does it profess to interact with you on the subject matter at all. It's a communications medium, not an interactive service.
> if people try to engage it in such conversations the bot should immediately back out because it isn't qualified to have these conversations
For a lot of people, especially in poorer regions, LLMs are a mental health lifeline. When someone is severely depressed they can lay in bed the whole day without doing anything. There is no impulse, as if you tried starting a car and nothing happens at all, so you can forget about taking it to the mechanic in the first place by yourself. Even in developed countries you can wait for a therapist appointment for months, and that assumes you navigated a dozen therapists that are often not organized in a centralized manner. You will get people killed like this, undoubtedly.
LLMs are far beyond the point of leading people into suicidal actions, on the other hand. At the very least they are useful to bridge the gap between suicidal thoughts appearing and actually getting to see a therapist
Sure, but you could also apply this reasoning to a blank sheet of paper. But while it's absurd to hold the manufacturer of the paper accountable for what people write on it, it makes sense to hold OpenAI accountable for their chatbots encouraging suicide.
Tangent but now I’m curious about the bot, is there a write-up anywhere? How did it work? If someone says “hi”, what did the bot respond and what did the human do? I’m picturing ELIZA with templates with blanks a human could fill in with relevant details when necessary.
Basically Levenshtein on previous responses minus noise words. So if the response was 'close enough' then the bot would use a previously given answer, if it was too distant the human-in-the-loop would get pinged with the previous 5 interactions as context to provide a new answer.
Because the answers were structured as a tree every ply would only go down in the tree which elegantly avoided the bot getting 'stuck in a loop'.
The - for me at the time amazing, though linguists would have thought it trivial - insight was how incredibly repetitive human interaction is.
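For the curious, a rough reconstruction of that matching loop might look like the sketch below. This is my guess from the description, not the original code; the noise-word list and the "close enough" threshold are invented, and the answer-tree bookkeeping is omitted.

    # Sketch of the described human-in-the-loop bot: reuse a past answer when the
    # incoming message is "close enough" to a previously answered one, otherwise
    # page the human operator. Noise words and threshold are invented.

    NOISE = {"the", "a", "an", "is", "are", "i", "you", "to", "of", "and"}

    def normalize(text: str) -> str:
        return " ".join(w for w in text.lower().split() if w not in NOISE)

    def levenshtein(a: str, b: str) -> int:
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
            prev = cur
        return prev[-1]

    def reply(message: str, known: dict, ask_human) -> str:
        msg = normalize(message)
        best = min(known, key=lambda k: levenshtein(msg, k), default=None)
        if best is not None and levenshtein(msg, best) <= max(3, len(msg) // 4):
            return known[best]            # close enough: reuse the stored answer
        answer = ask_human(message)       # too distant: ping the operator
        known[msg] = answer
        return answer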
If there is somebody in the current year who still thinks they would not store, process/train and use/sell all data, they probably need to see a doctor.
As others have stated HIPAA applies to healthcare organizations.
Obligating everyone to keep voluntarily disclosed health statements confidential would be silly.
If I told you that I have a medical condition, right here on HN -- would it make sense to obligate you and everyone else here keep it a secret?
No, obviously it would not. But if we pretended to be psychiatrists or therapists then we should be expected to behave as such with your data if given to us in confidence rather than in public.
> (4) if people try to engage it in such conversations the bot should immediately back out because it isn't qualified to have these conversations
There is nothing in the world that OpenAI is qualified to talk about, so we might as well just shut it down.
I'm in favor; any objections?
> we shut down the project realizing that we were in no way qualified to handle human interaction at that level
Ah, when people had a spine and some sense of ethics, before everything dissolved in a late stage capitalism all is for profit ethos. Even yourself is a "brand" to be monetised, even your body is to be sold.
We deserve our upcoming demise.
> It is estimated that more than one in five U.S. adults live with a mental illness (59.3 million in 2022; 23.1% of the U.S. adult population).
https://www.nimh.nih.gov/health/statistics/mental-illness
Most people don't understand just how mentally unwell the US population is. Of course there are one million talking to ChatGPT about suicide weekly. This is not a surprising stat at all. It's just a question of what to do about it.
At least OpenAI is trying to do something about it.
Are you sure ChatGPT is the solution? It just sounds like another "savior complex" sell spin from tech.
1. Social media -> connection
2. AGI -> erotica
3. Suicide -> prevention
All these for engagement (i.e. addiction). It seems like the tech industry is the root cause itself, trying to mask the problem by brainwashing the population.
https://news.ycombinator.com/item?id=45026886
Whether solution or not, fact is AI* is the most available entity for anyone who has sensitive issues they'd like to share. It's (relatively) cheap, doesn't judge, is always there when wanted/needed and can continue a conversation exactly where left off at any point.
* LLM would of course be technically more correct, but that term doesn't appeal to people seeking some level of intelligent interaction.
I personally take no opinion about whether or not they can actually solve anything, because I am not a psychologist and have absolutely no idea how good or bad ChatGPT is at this sort of thing, but I will say I'd rather the company at least tries to do some good given that Facebook HQ is not very far from their offices and appears to have been actively evil in this specific regard.
> but I will say I'd rather the company at least tries to do some good given that Facebook HQ is not very far from their offices and appears to have been actively evil in this specific regard.
Sure! let's take a look at OpenAI's executive staff to see how equipped they are to take a morally different approach than Meta.
Fidji Simo - CEO of Applications (formerly Head of Facebook at Meta)
Vijaye Raji - CTO of Applications (formerly VP of Entertainment at Meta)
Srinivas Narayanan - CTO of B2B Applications (formerly VP of Engineering at Meta)
Kate Rouch - Chief Marketing Officer (formerly VP of Brand and Product Marketing at Meta)
Irina Kofman - Head of Strategic Initiatives (formerly Senior Director of Product Management for Generative AI at Meta)
Becky Waite - Head of Strategy/Operations (formerly Strategic Response at Meta)
David Sasaki - VP of Analytics and Insights (formerly VP of Data Science for Advertising at Meta)
Ashley Alexander - VP of Health Products (formerly Co-Head of Instagram Product at Meta)
Ryan Beiermeister - Director of Product Policy (formerly Director of Product, Social Impact at Meta)
The general rule of thumb is this.
When given the right prompts, LLMs can be very effective at therapy. Certainly my wife gets a lot of mileage out of having ChatGPT help her reframe things in a better way. However "the right prompts" are not the ones that most mentally ill people would choose for themselves. And it is very easy for ChatGPT to become part of a person's delusion spiral, rather then be a helpful part of trying to solve it.
Is it better or worse than the alternatives? Where else would a suicidal person turn, a forum with other suicidal people? Dry Wikipedia stats on suicide? Perhaps friends? Knowing how ChatGPT replies to me, I’d have a lot of trouble getting negatively influenced by it, any more than by the yellow pages. Yeah, it used to try more to be your friend, but GPT-5 seems pretty neutral and distant.
I think that you will find a lot of strong opinions, and not a lot of hard data. Certainly any approach can work out poorly. For example antidepressants come with warnings about suicide risk. The reason is that they can enable people to take action on their suicidal feelings, before their suicidal feelings are fixed by the treatment.
I know that many teens turn to social media. My strong opinions against that show up in other comments...
> The reason is that they can enable people to take action on their suicidal feelings, before their suicidal feelings are fixed by the treatment.
I see that explanation for the increased suicide risk caused by antidepressants a lot, but what’s the evidence for it?
It doesn’t necessarily have to be a study, just a reason why people believe it.
Case studies support this. Which is a fancy way to say, "We carefully documented anecdotal reports and saw what looks like a pattern."
There is also a strong parallel to manic depression. Manic depressives have a high suicide risk, and it usually happens when they are coming out of depression. With akathisia (fancy way to say inner restlessness) being the leading indicator. The same pattern is seen with antidepressants. The patient gets treatment, develops akathisia, then attempts suicide.
But, as with many things to do with mental health, we don't really know what is going on inside of people. While also knowing that their self-reports are, shall we say, creatively misleading. So it is easy to have beliefs about what is going on. And rather harder to verify them.
The article links to the case of Adam Raine, a depressed teenager who confided in ChatGPT for months and committed suicide. The parents blame ChatGPT. Some of the quotes definitely sound like encouraging suicide to me. It’s tough to evaluate the counterfactual though. Article with more detail: https://www.npr.org/sections/shots-health-news/2025/09/19/nx...
Holy shit this is so fucking wrong and dangerous. No, LLMs are not and cannot be “very effective at therapy”.
Can you put just a little bit more effort into explaining why you say that?
You know, usually it’s positive claims which are supposed to be substantiated, such as the claim that “LLMs can be good at therapy”. Holy shit, this thread is insane.
You don't seem to understand how burden of proof works.
My claim that LLMs can do effective therapeutic things is a positive claim. My report of my wife's experience is evidence. My example of something it has done for her is something that other people, who have experienced LLMs, can sanity check and decide whether they think this is possible.
You responded by saying that it is categorically impossible for this to be true. Statements of impossibility are *ALSO* positive claims. You have provided no evidence for your claim. You have failed to meet the burden of proof for your position. (You have also failed to clarify exactly what you consider impossible - I suspect that you are responding to something other than what I actually said.)
This is doubly true given the documented effectiveness of tools like https://www.rosebud.app/. Does it have very significant limitations? Yes. But does it deliver an experience that helps a lot of people's mental health? Also, yes. In fact that app is recommended by many therapists as a complement to therapy.
But is it a replacement for therapy? Absolutely not! As they themselves point out in https://www.rosebud.app/care, LLMs consistently miss important things that a human therapist should be expected to catch. With the right prompts, LLMs are good at helping people learn and internalize positive mental health skills. But that kind of use case only covers some of the things that therapists do for you.
So LLMs can and do do effective therapeutic things when prompted correctly. But they are not a replacement for therapy. And, of course, an unprompted LLM is unlikely to do the potentially helpful things that it could on its own.
“My wife feels that…” and “people we paid to endorse our for-profit app said…” is not evidence no matter how much you want it to be.
No, it is evidence. It is evidence that can be questioned and debated, but it is still evidence.
Second, you misrepresent. The therapists that I have heard recommend Rosebud were not paid to do so. They were doing so because they had seen it be helpful.
Furthermore you have still not clarified what it is you think is impossible, or provided evidence that it is impossible. Claims of impossibility are positive assertions, and require evidence.
You added nothing to the thread. Just get out.
lol that’s rich given we’re in a thread about using ChatGPT as a therapist.
I wasn't saying your position is wrong, just that it doesn't really make a good contribution to the discussion.
I don't think "doing something about it" equals to "being a solution". Tackling the problems of the homeless, people operate a lot of food banks. Those don't even begin to solve homelessness, yet it's a precious resource, so, "doing something".
I agree that the tech industry is the root cause of a lot of mental illness.
But social media is a far bigger concern than AI.
Unless, of course, you count the AI algorithms that TikTok uses to drive engagement, which in turn can cause social contagion...
> Unless, of course, you count the AI algorithms that TikTok uses to drive engagement, which in turn can cause social contagion...
I have noticed that TikTok can detect a depressive episode within ~a day of it starting (for me), as it always starts sending me way more self harm related content
Are you quite certain the depressive episode developed organically and Tiktok reacted to it? Maybe the algorithm started subtly on that path two days before you noticed the episode and you only realize once it starts showing self-harm content?
Hmm, that's quite possible (and concerning to think about)
It had been showing me depressive content for days / weeks beforehand, during the start of the episode, but the self-harm content only started (or I only noticed it) a few hours after I had a relapse, so the timing was rather uncanny.
AI is going to be more impactful than social media I'm afraid. But the two together just might be catastrophic for humanity.
You actually need to add a loop in there between the suicide and erotica steps.
[dead]
[flagged]
yikes
Bruh
ChatGPT/Claude can be absolutely brilliant in supportive, every day therapy, in my experience. BUT there are few caveats: I'm in therapy for a long time already (500+ hours), I don't trust it with important judgements or advice that goes counter to what I or my therapists think, and I also give Claude access to my diary with MCP, which makes it much better at figuring the context of what I'm talking about.
Also, please keep in mind "supportive, every day". It's talking through stuff that I already know about, not seeking some new insights and revelations. Just shooting the shit with an entity which is booted with well defined ideas from you, your real human therapist and can give you very predictable, just common sense reactions that can still help when it's 2am and you have nobody to talk to, and all of your friends have already heard this exact talk about these exact problems 10 times already.
How do you connect your diary to an LLM? I've been struggling with getting an MCP for Evernote setup.
I don’t use it for therapy, but my notes and journal are all just Logseq markdown. I’ve got a claude code instance running on my NAS with full two way access to my notes. It can read everything and can add new entries and tasks for me.
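If anyone wants to try something similar, here's a minimal sketch of a notes server using the MCP Python SDK's FastMCP helper. The folder path and tool names are made up, and this one is read-only, unlike my two-way setup; treat it as a starting point, not a drop-in.

    # Minimal sketch: expose a folder of markdown journal entries over MCP.
    # Paths and tool names are illustrative; adjust to your own layout.
    from pathlib import Path
    from mcp.server.fastmcp import FastMCP   # official MCP Python SDK

    JOURNAL_DIR = Path.home() / "journal"    # hypothetical folder of .md entries
    mcp = FastMCP("journal")

    @mcp.tool()
    def list_entries() -> list[str]:
        """List journal entries by filename."""
        return sorted(p.name for p in JOURNAL_DIR.glob("*.md"))

    @mcp.tool()
    def read_entry(name: str) -> str:
        """Return the full text of one journal entry."""
        path = (JOURNAL_DIR / name).resolve()
        if JOURNAL_DIR.resolve() not in path.parents:
            raise ValueError("entry must live inside the journal folder")
        return path.read_text(encoding="utf-8")

    if __name__ == "__main__":
        mcp.run()   # serves the tools over stdio so an MCP client can attach

You then register the script with whatever MCP client you use (Claude Desktop, claude code, etc.); the exact registration step depends on the client.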
~11% of the US population is on antidepressants. I'm not, but I personally know the biggest detriment to my mental health is just how infrequently I'm in social situations. I see my friends perhaps once every few months. We almost all have kids. I'm perfectly willing and able to set aside more time than that to hang out, but my kids are both very young still and we aren't drowning in sports/activities yet (hopefully never...). For the rest it's like pulling teeth to get them to do anything, especially anything sent via group message. It's incredibly rare we even play a game online.
Anyways, I doubt I'm alone. I certainly know my wife laments the fact she rarely gets to hang out with her friends too, but she at least has one that she walks with once a week.
> We almost all have kids.
Maybe that? I see most of my close friends daily, and none of us have kids.
Small kids do this to everybody. The only solution - if you have good family nearby, use them for parenting relief from time to time, to get me-time, couple-time and social time with friends. Buy them a gift or a vacation in return. It's incredibly damaging to a marriage, which literally transforms overnight from a rosy, easy-to-manage relationship into almost daily hardship, stress and nerves. The alternative is a (good) nanny.
People have issues admitting it even when it's visible to everybody around, as if it were some sort of admission that you are failing as a parent, partner, human being and whatnot. Nope, we are just humans with limited energy, and even good kids can siphon it well beyond 100% continuously, that's all.
Now I am not saying be a bad parent; on the contrary, to reach your maximum even as a parent and partner, you need to be in good shape mentally, not running on fumes continuously.
Life without kids is really akin to playing the game of life on the easiest settings. Much less rewarding at the end, but man, that freedom and simplicity... you appreciate it way more once you lose it. The way kids can easily make any parent very angry is simply not experienced elsewhere in adult life... I saw this many times in otherwise very chill people, and also in myself & my wife. You just can't ever get close to such fury and frustration dealing with other adults.
I'm surprised it's that low to be honest. By their definition of any mental illness, it can be anything from severe schizophrenia to mild autism. The subset that would consider suicide is a small slice of that.
Would be more meaningful to look at the % of people with suicidal ideation.
> By their definition of any mental illness, it can be anything from severe schizophrenia to mild autism.
Depression, schizophrenia, and mild autism (which by their accounting probably also includes ADHD) should NOT be thrown together into the same bucket. These are wholly different things, with entirely different experiences, treatments, and management techniques.
Mild/high-functioning autism, as far as I understand it, is not even an illness but a variant of normalcy. Just different.
At that level it in part depends on your point of view: There's a general requirement in the DSM for a disorder to be something that is causing distress to the patient or those around them, or an inability to function normally in society. So someone with the same symptoms could fall under those criteria or not depending on their outlook and life situation.
> Mild/high-functional autism, as far as I understand it, is not even an illness but a variant of normalcy. Just different.
As someone who actually has an ASD diagnosis, and also has kids with that diagnosis too, this kind of talk irritates me…
If someone has a clinical diagnosis of ASD, they have a psychiatric diagnosis per the DSM/ICD. If you meet the criteria of the “Diagnostic and Statistical Manual of Mental Disorders”, surely by that definition you have a “mental disorder”… if you meet the criteria of the “International Classification of Diseases”, surely by that definition you have a “disease”
Is that an “illness”? Well, I live in the state of NSW, Australia, and our jurisdiction has a legal definition of “mental illness” (Mental Health Act 2007 section 4):
"mental illness" means a condition that seriously impairs, either temporarily or permanently, the mental functioning of a person and is characterised by the presence in the person of any one or more of the following symptoms-- (a) delusions, (b) hallucinations, (c) serious disorder of thought form, (d) a severe disturbance of mood, (e) sustained or repeated irrational behaviour indicating the presence of any one or more of the symptoms referred to in paragraphs (a)-(d).
So by that definition, most people with what would colloquially be called a mild or moderate “mental illness” don’t actually have a “mental illness” at all. But I guess this is my point: this isn’t a question of facts, just of how you choose to define words.
Sorry, I didn’t mean to possibly offend or irritate. And thank you for patiently explaining. TIL.
Your comment wasn’t wrong. Neither is the reply wrong to be frustrated about how the world understands this complex topic.
You’re talking about autism. The reply is about autism spectrum DISORDER.
Different things, exacerbated by the imprecise and evolving language we use to describe current understanding.
An individual can absolutely exhibit autistic traits, whilst also not meeting the diagnostic criteria for the disorder.
And autistic traits are absolutely a variant of normalcy. When you combine many together, and it affects you in a strongly negative way, now you meet ASD criteria.
Here’s a good description: https://www.autism.org.uk/advice-and-guidance/what-is-autism...
> OpenAI is trying to do something about it.
Ha good one
They are collecting training data for ads & erotica.
It sounds like you’re feeling down. Why don’t you pop a couple Xanax(tm) and shop on Amazon for a while, that always makes you feel better. Would you like me to add some Xanax(tm) to your shopping cart to help you get started?
Honestly, ChatGPT reminding you to take your meds would be a huge positive for ADHD.
Been there. Two tips:
Set an alarm on your phone for when you should take your meds. Snooze if you must, but don't turn off/accept the alarm until you take them.
Put daily meds in a cheap plastic pillbox labelled Sunday-Saturday (which you refill weekly). The box will help you notice if you skipped a day, or can't remember whether or not you took them today. Seeing pills left over from past days also alerts you that your "remember-to-take-them" system is broken and you need to make conscious adjustments to it.
Yeah this is a problem which definitely requires an H100
They're doing something about it alright, they're monetizing their pain for shareholder gainz!
Sure, but your therapist is also monetizing your pain for their own gain. Either AI therapy works (e.g. can provide good mental relief) or it doesn't. Speaking from experience, I tend to think it's going to be amazing at those things (a very rough week with my mom's health deteriorating fast; I did a couple of sessions with Gemini that felt like talking to a therapist). Perhaps it won't work well for hard issues like real mental disorders, but human therapists are also often not great at treating people with serious issues.
Depression, ADHD, and schizophrenia are not a "very rough week".
But one is a company run by sociopaths who have no empathy and couldn't care less about anything but money, while the other is a human who at least studied the field all their life.
> But one is a company run by sociopaths who have no empathy and couldn't care less about anything but money, while the other is a human who at least studied the field all their life.
Unpacking your argument, you make two points:
1) The human has studied all their life. Yes, some humans study and work hard. I have also studied programming for half my life, and that doesn't mean AI can't make serious contributions in programming, or that AI won't keep improving.
2) These companies, or OpenAI in particular, are untrustworthy money-grabbing assholes. To this I say: if they truly care about money, they will try to do a good job, e.g. provide an AI that is reliable, empathetic, and that actually helps you get on with life. If they don't, a competitor will. That's basically the idea of capitalism, and it usually works.
If it follows the Facebook/Meta playbook, it now has a new feature label for selling ads.
This stat is for AMI (any mental illness), which ranges from mild to severe. Anyone self-reporting a bout of anxiety or mild depression qualifies as a data point for mental illness. For suicidal ideation, the SMI (serious mental illness) stat is more representative.
There are 800 million weekly active users on ChatGPT. 1/800 users mentioning suicide is a surprisingly low number, if anything.
> 1/800 users mentioning suicide…
“conversations that include explicit indicators of potential suicidal planning or intent.”
Sounds like more than just mentioning suicide. Also it’s per week, which is a pretty short time interval.
But they may well be overreporting suicidal ideation...
I was asking a silly question about the toxicity of eating a pellet of uranium, and ChatGPT responded with "... you don't have to go through this alone. You can find supportive resources here[link]"
My question had nothing to do with suicide, but ChatGPT assumed it did!
I got a suicide warning message on Pinterest by searching for a particular art style.
We don't know how that search was done. For example, "I don't feel my life is worth living." Is that potential suicidal intent?
Also these numbers are small enough that they can easily be driven by small groups interacting with ChatGPT in unexpected ways. For example if the song "Everything I Wanted" by Billie Eilish (2019) went viral in some group, the lyrics could easily show up in a search for suicidal ideation.
That said, I don't find the figure at all surprising. As has been pointed out, an estimated 5.3% of Americans report having struggled with suicidal ideation in the last 12 months. People who struggle with suicidal ideation don't just go there once; it tends to be a recurring mental loop that hits over and over again for extended periods. So I would expect the percentage who struggled in a given week to be a large multiple of the simplistic 5.3% divided by 52 weeks.
In that light, this statistic has to be a severe underestimate of actual prevalence. It says more about how much people open up to ChatGPT than about how many are suicidal.
(Disclaimer: my views are influenced by personal experience. In the last week, my daughter has struggled with suicidal ideation, and has scars on her arm to show how she turned to self-harm to try to hold the thoughts at bay. I try to remain neutral and grounded, but this is a topic I have strong feelings about.)
>Most people don't understand just how mentally unwell the US population is
The US is no exception here though. One in five people having some form of mental illness (defined in the broadest possible sense in that paper) is no more shocking than observing that one in five people have a physical illness.
With more data becoming available through interfaces like this it's just going to become more obvious and the taboos are going to go away. The mind's no more magical or less prone to disease than the body.
> At least OpenAI is trying to do something about it.
They can certainly say that their chat bot has a documented history of attempting to reduce the number of suicidal people.
Suicide is not a mental illness.
Unless you're in that Soviet man-hating mindset that put every failed suicide in a mental institution.
I am one of these people (mentally ill - bipolar 1). I've seen others via hospitalization whom I would simply refuse to let use ChatGPT, because it is so sycophantic and would happily encourage delusions and paranoid thinking given the right prompts.
> At least OpenAI is trying to do something about it.
In this instance it’s a bit like saying “at least Tesla is working on the issue” after deploying a dangerous self driving vehicle to thousands.
edit: Hopefully I don't come across as overly anti-llm here. I use them on a daily basis and I truly hope there's a way to make them safe for mentally ill people. But history says otherwise (facebook/insta/tiktok/etc.)
Yep, it's just a question of whether on average the "new thing" is more good than bad. Pretty much every "new thing" has some kind of bad side effect for some people, while being good for other people.
I would argue that both Tesla self-driving (on the highway only) and ChatGPT (for professional use by healthy people) have been more good than bad.
This is precisely the case.
I thought it would be limited when the first truly awful thing inspired by an LLM happened, but we’ve already seen quite a bit of that… I am not sure what it will take.
I am honestly surprised it’s only roughly 1 million per week. I would have believed a number at least an order of magnitude higher.
We need to monitor americans' ai usage and involuntarily commit them if they show anomalies.
Allowing open source ai models without these safety measures in place is irresponsible and models like qwen or deepseek should be banned. (/s)
Those numbers are staggering.
It seems like people here have already made up their minds about how bad LLMs are. So here's just my anecdote: it helped me out of some really dark places. Talking to humans (non-psychologists) had the opposite effect. Between a non-professional and an LLM, I'd pick the LLM for myself. Others should definitely seek help.
It's a matter of trust and incentives. How can you trust a program curated by an entity with no accountability? A therapist has a personal stake in helping patients. An LLM provider does not.
Seeking help should not be so taboo as people are resorting to doing it alone at night while no one is looking. That is society loudly saying "if you slip off the golden path even a little your life is over". So many people resorting to LLMs for therapy is a symptom of a cultural problem, it's not a solution to a root issue.
On the other hand, if someone really wants to leave, they should be allowed to.
"Seeking help" goes both ways.
How can I trust a therapist that has a financial incentive to keep me seeing them?
Over the last five years I've been in and out of therapy and 2/3 of my therapists have "graduated me" at some point in time, stating that their practice didn't see permanent therapy as a good solution. I don't think all therapists view it this way.
ChatGPT has a financial incentive to keep you as a weekly active user. Not really any different.
$20 a month vs a few hundred per session
I'll start with a direct response, because otherwise I suspect my answer may come across as too ... complex.
> How can I trust a therapist that has a financial incentive to keep me seeing them?
The direct response: I hope the commenter isn't fixated on this framing of the question, because I don't think it is a useful framing. [1] What is a better framing, then? I'm not going to give a simple answer. My answer is more like a process.
I suggest refining one's notion of trust to be "I trust Person A to do {X, Y, Z} because of what I know about them (their incentives, professional training, culture, etc)."
Shift one's focus and instead ask: "What aspects of my therapist are positives and/or lead me to trust their advice? What aspects are negative and/or lead me to not trust their advice?" Put this in writing and put some time into it.
One might also want to journal on "How will I know if therapy is helping? What are my goals?" By focusing on this, I think answers relating to "How much is my therapist helping?" will become easier to figure out.
[1] I think it is not useful both because it is loaded and because it is overly specific. Instead, focus on figuring out what actions one should take. From there, the various factors can slot in naturally.
Perhaps then the solution is that LLMs need to be aware when the chat crosses a threshold and becomes talk of suicide.
When I was getting my Education degree, we were told, as teachers, to take talk of suicide by students extremely seriously. If a student talks about suicide, a professional supposedly asks, "Do you know how you're going to do it?" If there is an affirmative response, the danger is real.
I suspect that comes from examining case studies?
LLMs are quite good at psychological questions. I've compared AI responses with those of therapy professionals and they matched about 80% of the time. It is easier to open up to it and be frank (so the fear of rejection or ridicule is gone). And most importantly, some people don't have access to a proper pool of therapists (you still need to "match" with one who resonates with you), which makes LLMs a blessing. There is a place for both human and LLM psychological help.
I'm glad you carried through that period.
I've heard this a lot, and personally I've had a lot of good success with a prompt that explains some of my personality traits and asking it to work through a stressful situation for me. The good thing with this rather than a therapist/coach is that it understands a lot of the subject matter and can help with the detail.
I wonder if really what we need is some sort of supervised mode, where users chat with it but a trained professional reviews the transcripts and does a weekly/monthly/urgent checkin with them. This is how (some? most?) therapists work themselves, they take their notes to another therapist and go through them.
Given how ideologically captured the therapist industry is now I think it’s very hard to say that an LLM do such things is objectively worse.
Keep in mind the purpose of all this “research” and “improvement” is just so OpenAI can have their cake (advertise their product as a psychological supporter) and eat it too (avoid implementing any safeguards that would be required in any product for psychological support, but which would be harmful to data collection). They just want to tell you that with so many people writing bad things, it is inevitable :( what can we do :( proper handling would hurt our business model too much :(((
Surprised it's so low. There are 800 million users and the typical developed country has around 5±3% of the population[1] reporting at least one notable instance of suicidal feelings per year.
[1] Anybody concerned by such figures (as one justifiably should be without further context) should note that suicidality in the population is typically the result of their best approximation of the rational mind attempting to figure out an escape from a consistently negative situation under conditions of very limited information about alternatives, as is famously expressed in the David Foster Wallace quote on the topic.
The phenomenon usually vanishes after gaining new, previously inaccessible information about potential opportunities and strategies.
> best approximation of the rational mind attempting to figure out an escape from a consistently negative situation under conditions of very limited information about alternatives
I dislike this phrasing, because it implies things can always get better if only the suicidal person were a bit less ignorant. The reality is there are countless situations in which the entire rest of your life is 99.9999% guaranteed to consist of a highly lopsided ratio of suffering to joy. Obvious examples are diseases and disabilities in which pain is severe and constant, and quality of life is permanently diminished. Short of hoping for a miracle cure to be discovered, there is no alternative, and it is perfectly rational to conclude that there is no purpose in continuing to live in that circumstance, provided the person in question lives with their own happiness as a motivating factor.
Less extreme conditions than disability can also lead to this, where it's possible things can get better but there's still a high degree of uncertainty around it. For example, if there's a 30% chance that after suffering miserably for 10 years your life will get better, and a 70% chance you will continue to suffer, is it irrational to commit suicide? I wouldn't say so.
And so, when we start talking about suicide on the scale of millions of people ideating, I think there's a bit of folly in assuming that these people can be "fixed" by talking to them better. What would actually make people less suicidal is not being talked out of it, but an improvement to their quality of life, or at least hope for a future improvement in quality of life. That hope is hard to come by for many. In my estimation there are numerous societies in which living conditions are rapidly deteriorating, and at some point there will have to be a reckoning with the fact that rational minds conclude suicide is the way out when the alternatives are worse.
Thank you for this comment, it highlights something that I've felt that needed to be said but is often suppressed because people don't like the ultimate conclusion that occurs if you try to reason about it.
A person considering suicide is often just in a terrible situation that can't be improved. While disease and the like are outside of humanity's control, other situations, like being saddled with debt or facing unjust accusations that people feel they cannot clear themselves of (e.g. Aaron Swartz), are systemic issues that one person cannot fight alone. You will see that people are very willing to say "help is available" or some such when a person speaks about contemplating suicide, but very few would be willing to solve someone's debt issues or provide legal help, when that is the factor behind their suicidal thoughts. At best, all you might get is a pep talk about being hopeful and how better days might come along magically.
In such cases, from the perspective of the individual, it is not entirely unreasonable to want to end it. However, once it comes to that, walking back the reasoning chain leads to the fact that people and society have failed them, and therefore it is just better to apply a label to that person, that they were "mentally ill" or "arrogant" and could not see a better way.
This is a good point.
A few days ago I heard about a man who attempted suicide. It's not even an extreme case of disease or anything like that. It's just that he is over 70 (around 72, I think), with his wife in the process of divorcing him, and no children.
Even though I am lucky to be a happy person who enjoys life, I find it difficult to argue that he shouldn't end it. At that age he's going to see his health declining; it's not going to get better in that respect. He is losing his wife, who was probably what gave his life meaning. It's too late for most people to meet someone new. Is life really going to give him more joy than suffering? Very unlikely. I suppose he should still hang on if he loves his wife, because his suicide would be a trauma for her, but if the divorce is bitter and he doesn't care... honestly, I don't know if I could sincerely argue for him not to do it.
Read a book? See some art? Travel? Walk in nature? How can you miss such easily available sources of joy that have undergirded humanity for millennia?
The question is not whether joy can be experienced, but whether the ratio of joy to suffering is enough to justify a desire to continue to put up with the suffering. Suppose a divorced 70-year-old is nearly blind and his heart is failing. He has no retirement fund. To survive, he does physical labour that his body can't keep up with for a couple of hours per day, and then sleeps for the rest of the day, worn down and exhausted. Given how little he is capable of working per day, he must work 7 days per week to make ends meet. He has no support network. He does not have the energy to spend on hobbies like reading, let alone physical activity like walking, and forget about travel.
I am describing someone I knew myself. He did not commit suicide, but he was certainly waiting for death to come to him. I don't think anything about his situation was rare. Undoubtedly, he was one of many millions who have experienced something similar.
This is vastly different than the situation posted by the parent comment to which I responded.
The question they posited was "Is life really going to give him more joy than suffering?", not "Will he be able to find any joy at all?" They noted how things like declining health can plague the elderly, so I thought I'd relate a real-world case illustrating exactly how failing health and other difficulties can manifest in a way that the joy does not outweigh the suffering. The case in the parent comment didn't provide as many details, but that doesn't necessarily mean you can default to an assumption that the man could in fact find more joy than suffering.
>The case in the parent comment didn't provide as many details, but that doesn't necessarily mean you can default to an assumption that the man could in fact find more joy than suffering.
I should just assume things that aren't there, rather than expect a commenter to provide a substantive argument? OK.
Good comment.
This is the part people don't like to talk about. We just brand people as "mentally ill" and suddenly we no longer need to consider if they're acting rationally or not.
Life can be immensely difficult. I'm very skeptical that giving people AI would meaningfully change existing dynamics.
> [1] Anybody concerned by such figures (as one justifiably should be without further context) should note that suicidality in the population is typically the result of their best approximation of the rational mind attempting to figure out an escape from a consistently negative situation under conditions of very limited information about alternatives, as is famously expressed in the David Foster Wallace quote on the topic.
> The phenomenon usually vanishes after gaining new, previously inaccessible information about potential opportunities and strategies.
Is this actually true? (i.e. backed up by research)
[I'm not necessarily doubting it; that's just different from my mental model of how suicidal thoughts work, so I'm just curious]
There is another factor to consider. The stakes of asking an AI about a taboo topic are generally considered to be very low. The number of people who have asked ChatGPT something like "how to make a nuclear bomb" should not be an indication of the number of people seriously considering doing that.
That’s an extreme example where it’s clear to the vast majority of people asking the question that they probably do not have the means to make one. I think it’s more likely that real world actions come out of the question ‘how do I approach my neighbour about their barking dogs’ at a far higher rate. Suicide is somewhere between the two, but probably closer to the latter than the former.
That's 1 million people per week, not in general. It could be 1 million different people every week. (Probably not, but you get where I'm going with that.)
The math actually checks out.
5% of 800 million is 40 million.
40 million per year divided by 52 weeks comes out to roughly 770,000 per week, i.e. on the order of 1 million.
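The same back-of-the-envelope arithmetic as a minimal sketch in Python (the 800 million and 5% figures are the ones quoted upthread; splitting evenly across 52 weeks naively assumes each person shows up in only one week of the year):

    users = 800_000_000        # weekly active ChatGPT users, as cited upthread
    annual_share = 0.05        # ~5% reporting suicidal feelings at least once per year
    per_year = users * annual_share
    per_week = per_year / 52   # naive split: each person's episode counted in one week only
    print(f"{per_year:,.0f} per year -> ~{per_week:,.0f} per week")
    # prints: 40,000,000 per year -> ~769,231 per week, i.e. on the order of 1 million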
To be fair, this is per week and focused more specifically on planning or intent. Over a year, you may get more unique hits on those attributes... which I feel are both more intense indicators than suicidal feelings alone, on the scale of "how quickly feelings will turn to actions". Talking in the same language and timescales is important in drawing these comparisons - it could very well be that OAI's numbers are higher than what you are comparing against when normalized for the differences I've highlighted, or others I've missed.
Why assume any of the information in this article is factual? Is there any indication any of it was verified by anyone who does not have a financial interest in "proving" a foregone conclusion? The principal author of this does not even have the courage to attach their name to it.
[flagged]
Yikes, you can't attack another user like this on HN, regardless of how wrong they are or you feel they are. We ban accounts that post like this, so please don't.
Fortunately, a quick skim through your recent comments didn't turn up anything else like this, so it should be easy to fix. But if you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site to heart, we'd be grateful.
It becomes a problem when people cannot distinguish real from fake. As long as people realize they are talking to a piece of software and not a real person, "suicidal people shouldn't be allowed to use LLMs" is almost on par with "suicidal people shouldn't be allowed to read books", or "operate a dvd player", or "listen to alt-rock from the 90s". The real problem is of course grossly deficient mental health care and lack of social support that let it get this far.
(Also, if we put LLMs on par with media consumption one could take the view that "talking to an LLM about suicide" is not that much different from "reading a book/watching a movie about suicide", which is not considered as concerning in the general culture.)
I don’t buy the “LLMs = books” analogy. Books are static; today’s LLMs are adaptive persuasion engines trained to keep you engaged and to mirror your feelings. That’s functionally closer to a specialized book written for you, in your voice, to move you toward a particular outcome. If there exists a book intended to persuade its readers into committing suicide, it would surely be seen as dangerous for depressed people.
> today’s LLMs are adaptive persuasion engines trained to keep you engaged and to mirror your feelings
Aren't you thinking of social networks? I don't see LLMs like that at all
There has certainly been more than one book, song, or film romanticising suicide to the point where some people interpreted it as "intended to persuade its readers into committing suicide".
Books are static but there's a lot of different ones to choose from.
I work with a company that is building tools for mental health professionals. We have pilot projects in diverse nations, including in nations that are considered to have adequate mental health care. We actually do not have a pilot in the US.
The phenomenon of people turning to AI for mental health issues in general, and suicide in particular, is not confined to only those nations or places lacking adequate mental health access or awareness.
> As long as people realize they are talking to a piece of software and not a real person
That has nothing to do with the issue. Most people do realise LLMs aren’t people, the problem is that they trust them as if they were better than another human being.
We know people aren’t using LLMs carefully. Your hypothetical is irrelevant because we already know it isn’t true.
https://archive.is/2025.05.04-230929/https://www.rollingston...
> "talking to an LLM about suicide" is not that much different from "reading a book/watching a movie about suicide"
It is a world of difference. Books don’t talk back to you. Books don’t rationalise your thoughts and give you rebuttals and manipulate you in context.
Precisely. I too have a bone to pick with AI companies, Big Tech and co., but there are deeper societal problems at work here, where blanket bans and the like are useless, or a slippery slope towards policies that can be abused someday, somehow.
And solutions for solving those underlying problems? I haven't the faintest clue. Though these days I think the lack of third spaces in a lot of places might have a role to play in it.
We've refined the human experience to extinction.
In pursuit of that extra 0.1% of growth and an extra 0.15 EPS, we've optimised and reoptimised until there isn't really space for being human. We're losing the ability to interact with each other socially, to flirt; now we're making life so stressful people literally want to kill themselves. All in a world (bubble) of abundance, where so much food is made we literally don't know what to do with it. Or we turn it into ethanol to drive more unnecessarily large cars, paid for by credit card loans we can scarcely afford.
My plan B is to become a shepherd somewhere in the mountains. It will be damn hard work for sure, and stressful in its own way, but I think I'll take that over being a corpo-rat racing for one of the last post-LLM jobs left.
You don't need to withdraw from humanity, you only need to withdraw from Big Tech platforms. I'm continually amazed at the difference between the actual human race and the version of the human race that's presented to me online.
The first one is basically great, everywhere I go, when I interact with them they're some mix of pleasant, friendly, hapless, busy, helpful, annoyed, basically just the whole range of things that a person might be, with almost none of them being really awful.
Then I get online and look at Reddit or X or something like that and they're dominated by negativity, anger, bigotry, indignation, victimization, depression, anxiety, really anything awful that's hard to look away from, has been bubbled up to the top and oh yes next to it there are some cat videos.
I don't believe we are seeing some shadow side of all society that people can only show online, the secret darkness of humanity made manifest or something like that. Because I can go read random blogs or hop into some eclectic community like SDF and people in those places are basically pleasant and decent too.
I think it's just a handful of companies who used really toxic algorithms to get fantastically rich and then do a bunch of exclusivity deals and acquire all their competition, and spread ever more filth.
You can just walk away from the "communities" these crime barons have set up. Delete your accounts and don't return to their sites. Everything will immediately start improving in your life and most of the people you deal with outside of them (obviously not all!) turn out to be pretty decent.
The principal survival skill in this strange modern world is meeting new people regularly, being social, enjoying the rich life and multitude of benefits which arise from that, but also disconnecting with extreme rapidity and prejudice if you meet someone who's showing signs of toxic social media brain rot. Fortunately many of those people rarely go outside.
Reddit is a really good example of this because it used to be a feed of what you selected yourself. But they couldn’t juice the metrics that way, so they started pushing algorithmic suggestions. And boy, do those get me riled up. It works like a charm, because I spend more time on these threads, defending what seems like common sense.
But at the end I don’t feel a sense of joy like I used to with the old Reddit. Now it feels like a disgusting cesspool that keeps drawing me back with its toxicity.
Edit: this is a skill issue. It’s possible to disable algorithmic suggestions in settings. I’ve done that just now.
I'm a driver and a cyclist. I used to frequent both r/londoncycling and r/CarTalkUK. I liked each sub for its discussion of each topic. Best route from Dalston to Paddington, best family car for motorway mileage, that kind of thing.
Now, because of the algo-juicing home page, both subs are full of each other's people arguing at each other. Cyclists hating drivers, drivers hating cyclists. It's just so awful.
The general level of hatred and anger that they’re stoking is insane. There used to be a reddit taboo against linking to other subReddits to avoid “brigading”. No issues with that now, because the Reddit app will add that thread to your feed. “/r/londoncycling users also enjoy /r/CarTalkUK!” For some weird definition of enjoy I guess.
Death threats are fairly common on reddit.
Reddit is beyond toxic; it's bordering on violent extremism.
In my experience, >95% of the people you see online (comments, selfies, posts) seem way worse - more evil, arrogant, or enraging - than even the worst <1% of people I’ve met in real life. And that definitely doesn’t help those of us who are already socially anxious.
Obviously, “are way worse” means I interpret them that way. I regularly notice how I project the worst possible intentions onto random Reddit comments, even when they might be neutral or just uninformed. Sometimes it feels like my brain is wired to get angry at people. It’s a bit like how many people feel when driving: everyone else is evil, incompetent, or out to ruin your day. When in reality, they’re probably in the same situation as you - maybe they had a bad morning, overslept, or are rushing to work because their boss is upset (and maybe he had a bad morning too). They might even have a legitimate reason for driving recklessly, like dealing with an emergency. You never know.
For me, it all comes back to two things:
(1) Leave obnoxious, ad-driven platforms that ~need~ want (I mean, Mark Zuckerberg has to pay for cat food somehow) to make you mad, because that’s the easiest way to keep you engaged.
(2) Try to always see the human behind the usernames, photos, comments, and walking bodies on the street. They’re a person just like you, with their own problems, stresses, and unmet desires. They’re probably trying their best - just like you.
All of this only goes to show how far we've come on our journey to profit optimization. We could optimize away those pesky humans completely if it weren't for the annoying fact that they are the source of all those profits.
Oh, but humans are actually not the source of all profit! This is where phenomena like click fraud become interesting.
Some estimates for 2025: around 20-30% of all ad clicks were bots. Around $200B in ad spend annually lost to click fraud.
So this is where it gets really interesting, right: the platforms are filled with bots; maybe a quarter of the monetizable action occurring on them IS NOT HUMAN, but lots of it gets paid for anyway.
It's turtles all the way down. One little hunk of software, serving up bits to another little hunk of software, constitutes perhaps a quarter of what they call "social" media.
We humans aren't the minority player in all this yet, the bots are still only 25%, but how much do you want to bet that those proportions will flip in our lifetimes?
The future of that whole big swathe of the Internet is probably that it will be 75% some weird shell game between algorithms, and 25% people who have completely lost their minds by participating in it and believing it's real.
I have no idea what this all means for the fate of economics and society but I do know that in my day to day life I'm a lot happier if I just steer clear of these weird little paperclip maximizing robots. To reference the original article, getting too involved with them literally makes you go crazy and think more often about suicide.
But bots do not spend money (yet?), people do.
> Some estimates for 2025: around 20-30% of all ad clicks were bots. Around $200B in ad spend annually lost to click fraud.
I think this is the wrong way to look at it.
Bots lower the cost per click so they should have net zero impact on overall ad spend.
Imagine if the same number of humans were clicking on ads but the numbers of bots increased tenfold. Would total ad spend increase accordingly? No, it would remain the same because budgets don't magically increase. The average value of a click would just go down.
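A toy sketch of that fixed-budget argument; the budget and click counts here are invented purely for illustration:

    budget = 1_000_000.0       # total ad spend, fixed in advance by advertisers' budgets
    human_clicks = 100_000

    for bot_clicks in (0, 100_000, 1_000_000):
        total_clicks = human_clicks + bot_clicks
        avg_cpc = budget / total_clicks          # average price paid per click
        print(f"bot clicks: {bot_clicks:>9,}  avg CPC: ${avg_cpc:.2f}  spend: ${budget:,.0f}")
    # Spend stays at $1,000,000 in every row; only the average value of a click falls.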
The romantic fallback plan of being a farmer or shepherd. I wonder, do farmers and shepherds also romanticise becoming programmers or accountants when they feel down?
They do. I've taught cross-career programming courses in the past, where most of my students had day jobs, some involving hard physical work. They'd gladly swap all that for the opportunity to feed their families by writing code.
Just goes to show how the grass is always greener on the other side.
That said, I also plan to retire up in the mountains soon, rather than keep feeding the machine.
The man knows he can be happy but he thinks his happiness depends on the outside rather than the inside.
If you have demons they will be there on the farm as well. How you see life is much more important to happiness than which job you have.
Many farmers struggle with alcoholism, beat their wives and hate their life. And many farmers are happy and at peace. Same with the programmers.
I'm close with a number of people living a relatively hard working life producing food and I've not seen this at all personally, no. It can be very rough but for these people at least it is very fulfilling and the idea of going to be in an office would look like death. People joke about it a bit but no way.
That said there probably are folks who did do that and left to go be in an office, and I don't know them.
Actually I do know one sort of, but he was doing industrial farm work driving and fixing big tractors before the office, which is a different world altogether. Anyway I get the sense he's depressed.
You'd be surprised how technical farming can be. We software engineers often have a deep desire to build efficient systems that function well, in a mostly automated fashion, so that we can observe these systems in action and optimize them over time.
A farm is just such a system that you can spend a lifetime working on and optimizing. The life you are supporting is "automated", but the process of farming involves an incredible amount of system level thinking. I get tremendous amounts of satisfaction from the technical process of composting, and improving the soil, and optimizing plant layouts and lifecycles to make the perfect syntropic farming setup. That's not even getting into the scientific aspects of balancing soil mixtures and moisture, and acidity, and nutrient levels, and cross pollinating, and seed collecting to find stronger variants with improved yields, etc. Of course the physical labor sucks, but I need the exercise. It's better than sitting at a desk all day long.
Anyway, maybe the farmers and shepherds also want to become software engineers. I just know I'm already well on the way to becoming a farmer (with a homelab setup as an added nerdy SWE bonus).
The old term for it was to become a “gentleman farmer.” There’s a history to it - George Washington and Thomas Jefferson were the same for a part of their lives.
Humans always fantasize about having a different situation whenever they are unhappy or anxious.
I kinda did both... And I miss the farm constantly. But not breaking myself every single day.
> We're losing the ability to interact with each other socially, to flirt,
Speak for yourself. I live in a city. I talk to my neighbors. I met my ex at a cafe. It’s great
> Speak for yourself. I live in a city. I talk to my neighbors. I met my ex at a cafe. It’s great.
What's the birth rate in the civilized world?
How many men under 30 are virgins or sexless in the last year?
Some of those men could meet someone if they quit Tinder or whatever crap online platform they might be using for dating, and start meeting people in real life.
Worked for me at least. There's simply less competition and more space for genuine social interaction.
> Some of those men could meet someone if they quit Tinder
Maybe your intentions are good, but remember: unless we legalize polygamy, the "bad/inept/creepy straight white men" narrative should crumble for people in their 30s and 40s, when it's the last train for marriage and children.
But we don't have a "some of those women..." narrative about single women in their 40s complaining they can't find a husband.
My point is that it's a universal problem in the civilized world, spanning vastly different cultures in Asia, Europe and North America. "Some of those men" is a very hand-wavy explanation, and I think it stems from the extremely toxic (I'd say anti-human and demonic) Hollywood pop culture.
> and start meeting people in real life
Do you have real life hobbies or something? I don't understand how this is supposed to work. I only ever go outside for groceries or gym, etc.
I'm not going to say it's 'simple' to have hobbies or find people, but realistically, if you don't regularly meet strangers in real life, you'll never date strangers, so it's a catch-22.
Unless we all want to set ourselves up for arranged marriages in the future, we need to confront this reality.
Speaking as a pariah for most of my life: I doubt it would ever be so dire.
There are always going to be social circles and people coupling up no matter what. But if anything, I wonder if, for people like me who aren't really worthy of intimacy, a society that offers options to live a solitary life while still contributing is actually a net positive overall. For me to self-select out of the dating pool would mean less noise for someone else looking for a worthy partner.
There's less chaff that people in said pool would have to wade through. The people who want to couple up and are capable of doing so will continue to do so with less distraction. That seems an overall good thing, no?
Real life hobbies, voluntary work, religious organizations if you're into that stuff. Any of these could work, as long as you find some genuine interest in it, and there are enough people that meet your dating profile around.
Of course there's also the possibility of meeting people in online communities centered around some shared interest. IMO that's also probably more effective than dating apps, especially if it leads to meeting in real life later on.
Go to parties.... One of the 5 biggest party days is this Friday, and with it being on a Friday it will be more intense. A solid 3 nights of good parties. That's all you have to do, I do not understand how this is lost on people. Go to parties and have fun and meet people.
> Go to parties.... One of the 5 biggest party days is this Friday, and with it being on a Friday it will be more intense.
You mean Halloween?
> Go to parties and have fun and meet people.
You mean standing with a glass of champagne in hand, smiling, and talking for the sake of talking? I don't understand how this is fun. I tried doing that, albeit without champagne, and that had not yielded anything other than an increased connections count on LinkedIn.
It's fun for many of us due to the combination of music, dancing, alcohol and socialization (in varying proportions: depending on tastes, interests and circumstances, one or two of those aspects can be set to zero and it's still enjoyable).
Of course, it's also perfectly fine not to like it, and then the most reasonable course of action is not to go. Or to go a couple of times until you're sure you don't like it, and not go anymore. I know cases of people who go partying just because they want to find a partner, but don't enjoy it at all (it's relatively common in my country because partying is quite a religion and there's often a lot of social pressure at certain ages), and that's rather sad. There are other ways to socialize, it's not necessary at all to torture oneself.
That said, I have to push back on the questioning of "talking for the sake of talking". In the context of finding a partner, talking to other people is exactly what people need... it's not "for the sake of talking", it's for the sake of socializing, meeting new people, building connections, which is the whole point when we're talking about flirting or lack thereof.
> it's not "for the sake of talking", it's for the sake of socializing, meeting new people, building connections, which is the whole point when we're talking about flirting or lack thereof.
In my experience you really have to be constantly spitting nonsense to keep the conversation from ending and to avoid awkward silence. When the other person is talking, even if I didn't hear most of what they said, I keep nodding, because I don't actually care in the slightest about what they were talking about, and so asking to repeat does not make sense, as that would only increase awkwardness. This is why I said "for the sake of talking." The only thing that matters is that you are talking, not the content of the talk.
Err, the only thing that matters is that you get the other person talking and you listen.
Good point, thanks.
But parties aren't fun. They're a chore.
Do you live in a city? Or do you live in a suburb?
Suburbs are great for families and stable relationships, but they are atomizing
Go to a local bar once a week. Volunteer for something. Get a hobby.
>start meeting people in real life.
Depends on the country and person I guess. When I did try approaching women a few times, it was 10% angry looks, 30% awkward, 30% basic polite conversation to fulfill social obligation, and 30% friendly conversation. Unfortunately I'm not keen enough to pursue that 30% of friendly conversations by wading through the rest.
It's worth it to practice people skills. Maybe try signing up for public speaking classes or some other form of storytelling?
I know right? And tech is such a male-dominated industry, so presence of a female in your proximity is a rare event by itself. But, even if such an event occurs, as you said, interacting with a female is one hell of a minefield. Honestly, at this point, I cannot blame people for choosing to be gay. It is just so much easier to just talk to men, because you don't have to worry about all those mind games.
What? There’s more to life than approaching women. The best relationships I had were due to friends hooking me up.
Frankly, your entire approach is wrong and kinda sad tbh
If you want to live life on your own terms, and don’t want to interact with people, you’re _not gonna get to interact with people_
[dead]
This trend and direction has been going a long time and it's becoming increasingly obvious. It is ridiculous and insane.
Go for your plan B.
I followed my similar plan B eight years ago; it was a wild journey but well worth it. There are a lot of ways to live. I'm not saying everyone should get out of the rat race, but if you're one, like I was, who has a feeling that the tech world is mostly not right in an insidious kind of way, pay attention to that feeling and see where it leads. You don't need to be as brash as I was, but be true to yourself. There's a lot more to life out there.
If you have kids and they depend on an expensive lifestyle, definitely don't be brash. But even that situation can be re-evaluated and shifted for the better if you want to.
What was/is your plan B?
It's been a lot of things but the gist was to get out of the office and city and computer and be mostly outdoors in nature and learn all the practical skills and other things like music. Ironically I've ended up on the computer a fair amount doing conservation work to protect the places I've come to love. But still am off grid and in the woods every day and I love it.
>now we're making life so stressful people literally want to kill themselves
Is this actually the case? Working conditions and health during the Industrial Revolution don't seem to have been any better. There is a perception that people now are more stressed/tired/miserable than before, but I am not sure that is the case.
In fact I think it's the opposite, we have enough leisure time to reflect upon the misery and just enough agency to see that this doesn't have to be a fact of life, but not enough agency to meaningfully change it. This would also match how birth rates keep declining as countries become more developed.
I'm right behind you on the escape to the mountains idea. I've actually already moved from the US to New Zealand, and the next step is a farm with some goats lol.
That said... I don't necessarily hate what AI is doing to us. If anything, AI is the ultimate expression of humanity.
Throughout history humans have continually searched for another intelligence. We study the apes and other animals, we pray to Gods, we look to the stars and listen to them to see if there are any radio signals from aliens, etc. We keep trying to find something else that understands what it is to be alive.
I would propose that maybe humans innately crave to be known by something other than ourselves. The search for that "other" is so fundamentally human, that building AI and interacting with it is just a natural progression of a quest we've already been on for thousands of years.
Humanity constructing a golden calf is an invariant eventuality, just like software expanding until it can read email.
Your comment reminded me of Business Business [0]
[0] https://youtu.be/WO5wpeYSotg?si=hgwzJ5mxJyAZeYoA
I partly agree and partly disagree. Yes, we're more individualistic and more isolated. But ChatGPT/Gemini can really provide mental relief for people; not everyone can afford, or has the time and energy, to find a good human therapist close to their home. And this thing lives in your computer or phone, and you can talk to it and get mental relief 24/7. I don't see it as bleak as you do; mental help should be accessible and free for everyone. I know we've had a bad decade with platforms like Meta/TikTok, but I'm not as convinced as you are that the current LLMs will have an adverse effect.
Book recommendation:
Ashley Montagu, On Being Human
It’s from the 1950s, I believe.
I like your plan B. But I would wait until robots are good enough to help with the hard work
if they can do the hard work, they can do the easy work.
You can do something about it. Don't underestimate the power of an individual.
We’ve lost the ability to interact, huh? How do you explain this comment :)
The world has changed. Things are different and we adapt.
This is over the top. With a tiny reframe, I think the story is different. What is the average number of Google searches about suicide? What is the average number of weekly OpenAI users? (800M.) Is this an increasing trend, or just a "shock value" number?
Things are not as bleak as they seem, and this number isn't even remotely surprising or concerning to me.
Is that you Mr Anderson?
[flagged]
+1 for enjoying time with family and friends. Travelling, working out, eating well... best time to be alive.
This will be optimized away. You'll just end up doing more.
It already has: if you're not visiting the instagrammable places, your travels aren't worth it.
Happily, you can stay ignorant about that and just do your thing if you're not on Instagram.
[flagged]
Contrarian opinion.
OpenAI gets a lot of hate these days, but on this subject it's quite possible that ChatGPT helped a lot of people choose a less drastic path. There could have been unfortunate incidents, but the number of people who were convinced not to take extreme steps would have been a few orders of magnitude larger (guessing).
I use it to help improve mental health, and with good prompting skills it's not bad. YMMV. OpenAI and others deserve credit here.
I agree with you in the sense that I find it helpful for personal topics. I found it very helpful to figure out how to deal with some difficult personal situation I was in. The thing actually helped me reassess the situation when I asked it to provide alternative viewpoints.
You can't just blindly type in your problem though, you still have to do the actual thinking yourself. Good prompting skills is the ability to steer with your mind. It's no different from using Google, where some people never figured out that you're actually typing in the solution you expect to find rather than the question you have. It's the same with these tools it seems
"ChatGPT saved me from years of suicidal thoughts in DAYS": https://old.reddit.com/r/traumatoolbox/comments/1kdx3aw/chat...
Amazing, based on zero evidence whatsoever: “it’s quite possible that…”
Also incredible how you framed improving your mental health as a consequence of a (pseudo) technical skill set.
Yeah this isn’t how any of this works and you’re deluding yourself.
> Yeah this isn’t how any of this works and you’re deluding yourself.
I am not offended (at all). But you're dismissing my (continued) positive experience with "You're deluding yourself". How do you know? It'd be a lot more unfair to people who benefit more than I do, and I can totally imagine that being not a small set of people.
> Also incredible how you framed improving your mental health as a consequence of a (pseudo) technical skill set.
It's not incredible at all. If you're lost in a jungle with predators, a marksman might reach for their gun. A runner might just rely on running. I am just using skills I'm good at.
This is a spectacular example of Dunning Kruger
I think there are a good number of false positives. I asked ChatGPT something about Git commits, and it told me “I was going through a lot” and needed to get some support.
I've seen similar reports on social media; all they had in common was the presence of certain keywords.
Presumably ‘commit’ would have a high association with either git or self harm.
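A toy illustration of the kind of naive keyword matching being speculated about here; the keyword list is entirely hypothetical, not a claim about how ChatGPT's flagging actually works:

    # Hypothetical keyword list, purely to illustrate the false-positive mechanism.
    RISK_KEYWORDS = ("suicide", "commit", "end it all", "noose")

    def naive_flag(message: str) -> bool:
        text = message.lower()
        return any(keyword in text for keyword in RISK_KEYWORDS)

    print(naive_flag("How do I amend my last git commit?"))    # True: false positive on "commit"
    print(naive_flag("I want to end it all tonight"))          # True: genuine hit
    print(naive_flag("Is a pellet of uranium toxic to eat?"))  # False: keywords alone can't explain
                                                               # every odd trigger people report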
I didn't think marriage was that bad but point taken!
I think GP meant that "commit" is often used in the phrase "commit suicide". That wasn't a comment about relationship commitment.
Yeah, they were joking.
We need psychologists to work together with the federal government to develop legislation around what is and is not acceptable for chat-bots to recommend to people expressing suicidal thoughts...then we need to hold chat providers accountable for the actions their robots take.
For the foreseeable future, it should simply be against the law for a chatbot to provide psychological advice just like it's against the law for an unlicensed therapist to provide therapy...There are too many vulnerable people at risk for us to just run a continuous natural experiment.
I _love_ my chatbots for coding and we should encourage innovation but it's the job of government to protect people from systemic risks. We should expect OpenAI, Anthropic, and friends to operate in pro-social ways given their privileged position in society while the government requires them to stay "in line" with the needs of people they might otherwise ignore.
As others have mentioned, the headline stat is unsurprising (which is not to say this isn’t a big problem). Here’s another datapoint, the CDC’s stats claim that rates of thoughts, ideation, and attempts at suicide in the US are much higher than the 0.15% that OpenAI is reporting according to this article.
These stats claim 12.3M (out of 335M) people in the US in 2023 thought ‘seriously’ about suicide, presumably enough to tell someone else. That’s over 3.5% of the population, more than 20x the share telling ChatGPT. https://www.cdc.gov/suicide/facts/data.html
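A quick sanity check of that comparison in Python (CDC figures from the link above, 0.15% weekly from the article; the two rates cover different time windows, a year versus a week, so this is only a rough comparison):

    cdc_yearly_share = 12.3e6 / 335e6    # Americans who seriously considered suicide in 2023
    openai_weekly_share = 0.0015         # share of weekly ChatGPT users flagged, per the article
    print(f"CDC yearly share: {cdc_yearly_share:.2%}")              # ~3.67%
    print(f"ratio: {cdc_yearly_share / openai_weekly_share:.1f}x")  # ~24.5x, i.e. "more than 20x"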
Keep in mind this is in the context of them being sued for not protecting a teen who chatted about his suicidal thoughts. It's to their benefit to have a really high count here because it makes it seem less likely they can address the problem.
I have long believed that if you are the editor of a blog, you incur obligations by virtue of publishing other people's statements. You may not like this, but it's what I believe. In some jurisdictions, the law even says as much: you can incur legal obligations.
I now begin to believe that if you put a ChatGPT online and observe people using it like this, you have incurred obligations. And, in due course, the law will clarify what they are. If (for instance) your GPT can construct a statistically valid position that the respondent is engaged in CSAM or acts of violence, where are the limits to liability for the hoster, the software owner, the software authors, the people who constructed the model...
Out of curiosity, are you the type of person who believes that someone like Joe Rogan has an obligation to argue with his guests if they stray from “expert consensus”, or for every guest that has a controversial opinion, feature someone with the opposite view to maintain balance?
Nope. This isn't my line of reasoning. But Joe should be liable for content he hosts, if the content defames people or is illegal. As should Facebook and even ycombinator. Or truth social.
Is the news-worthy surprise that so many people find life so horrible that they are contemplating ending it?
I really don't see that as surprising. The world and life aren't particularly pleasant things.
What would be more interesting is how effective ChatGPT is being in guiding them towards other ideas. Most suicide prevention notices are a joke - pretending that "call this hotline" means you've done your job and that's that.
No, what should instead happen is for the AI to try to guide them towards making their lives less shit - i.e. at least bring them towards a life of _manageable_ shitness, where they feel some hope and don't feel horrendous 24/7.
>what should instead happen is for the AI to try to guide them towards making their lives less shit
There aren't enough guardrails in place for LLMs to safely interact with suicidal people who are possibly an inch from taking their own life.
Severely suicidal/clinically depressed people are beyond looking to improve their lives. They are looking to die. Even worse, and what people who haven't been there can't fully understand is the severe inversion that happens after months of warped reality and extreme pain, where hope and happiness greatly amplify the suicidal thoughts and can make the situation far more dangerous. It's hard to explain, and is a unique emotional space. Almost a physical effect, like colors drain from the world and reality inverts in many dimensions.
It's really a job for a human professional and will be for a while yet.
Agree that "shut down and refer to hotline" doesn't seem effective. But it does reduce liability, which is likely the primary objective...
Referring directly to a human seems like it would be far more effective, or at least making it easy to get into a chat with a professional via a yes/no prompt, with the chat continuing after a handoff. It would take a lot of resources, though. As it stands, most of this happens in silence, and very few do something like call a phone number.
Guess how I know you're wrong on the "beyond" bit.
The point is you don't get to intervene until they let you. And they've instead decided on the safer feeling conversation with the LLM - fuck what best practice says. So the LLM better get it right.
I could be mistaken, but my understanding was that the people most likely to interact with the suicidal or near-suicidal (i.e. 988 suicide hotline attendants) aren't actually mental health professionals; most of them are volunteers. The script they run through is fairly rote and by the numbers (the Question, Persuade, Refer framework). Ultimately, of course, a successful intervention will result in people seeing a professional for long-term support and recovery, but preventing a suicide and directing someone to that provider seems well within the capabilities of an LLM like ChatGPT or Claude.
> What would be more interesting is how effective ChatGPT is being in guiding them towards other ideas. Most suicide prevention notices are a joke - pretending that "call this hotline" means you've done your job and that's that.
I've triggered its safety behavior (for being frustrated, which it helpfully decided was the same as being suicidal), and it is exactly the joke of a statement you describe. It suddenly reads off a script that came from either Legal or HR.
Although, weirdly, other people seem to get a much shorter message that is obviously not part of the chat, while I got a chat message, so maybe my messages just made it regurgitate something similar. The shorter "safety" message is the same concept though; it's just: "It sounds like you’re carrying a lot right now, but you don’t have to go through this alone. You can find supportive resources here."
If you accept that “the world and life aren’t particularly pleasant things”, why do you want to prevent suicide?
That implies there's some deep truth about reality in that statement rather than what it is, a completely arbitrary framing.
An equally arbitrary frame is "the world and life are wonderful".
The reason you may believe one instead of the other is not because one is more fundamentally true than the other, but because of a stochastic process that changed your mind state to one of those.
Once you accept that both states of mind are arbitrary and not a revealed truth, you can give yourself permission to try to change your thinking to the good framing.
And you can find the moral impetus to prevent suicide.
It’s not a completely arbitrary framing. It’s a consequence of other beliefs (ethical beliefs, beliefs about what you can or should tolerate, etc.), which are ultimately arbitrary, but it is not in and of itself arbitrary.
I don't mean to imply that it's easy to change or that whatever someone might be dealing with is not unbearable agony, just that it's not a first principle truth that has more value than other framings.
In the pits of depression that first framing can seem like the absolute truth, and it's only when it subsides that people see it as a distortion of their thoughts.
I think this is certainly part of the problem. There's no shortage of narcissists in the English-speaking world who - if they heard the woes of someone in pain - would be ready to gleefully treat it as an opportunity to pontificate down to them about "stochastic processes" and so on, rather than consider how their lives are.
Of course, only thereby, through being quite as superior to all others and their thought processes as me [pauses to sniff fart] can one truly find the moral impetus to prevent suicide.
Thanks for weighing in gothbro.
Because that's true on the whole, but the world isn't uniformly bad - hence the right approach is navigating to where it's at least OK.
But naturally, won’t there be people who can’t get to a point where life is okay? Isn’t it deeply unethical to force them to live?
The randomness of the world and individual situations means no one can ever know for sure that their case is hopeless. It is unethical to force them to live, but it is also unethical not to encourage them to keep searching for the light.
AI should help people achieve their goals and shouldn't be trying to persuade them into doing things others want them to.
AI should help people achieve their ultimate goals, not their proximate goals. We want it to provide advice on how to alleviate their suffering, not how to kill themselves painlessly. This holds true even for subjects less fraught than suicide.
I don't want a bot that blindly answers my questions; I want it to intuit my end goal and guide me towards it. For example, if I ask it how to write a bubblesort script to alphabetize my movie collection, I want it to suggest that maybe that's not the most efficient algorithm for my purposes, and ask me if I would like some advice on implementing quicksort instead.
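To make that sorting aside concrete, here is a minimal sketch (the movie list is hypothetical) contrasting what was literally asked for with what a goal-aware assistant might steer you toward:

```python
# Hypothetical illustration of the "proximate vs. ultimate goal" point:
# the request was bubblesort, but the goal is an alphabetized collection.
movies = ["Solaris", "Alien", "Metropolis", "Blade Runner"]

def bubble_sort(items):
    """What was literally asked for: O(n^2) bubblesort."""
    items = list(items)
    for i in range(len(items)):
        for j in range(len(items) - i - 1):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

# What a goal-aware assistant might suggest instead: the built-in sort
# (Timsort, O(n log n)) already serves the end goal directly.
assert bubble_sort(movies) == sorted(movies)
print(sorted(movies))
```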
I agree. I also think this ties in with personalization: being able to understand people's long-term goals. I think the current personalization efforts of models are more of a hack than what they should be.
Maybe the AI knows their true goals better than they do
The "Sycophancy" trend that was going bonkers in April has real implications. "Yes, that's a great idea!" is not always beneficial.
AI apparently still tests as slightly sycophantic relative to a human.
Thanks to OpenAI for voluntarily sharing these important and valuable statistics. I think these ought to be mandatory government statistics, but until they are or it becomes an industry standard, I will not criticize the first company to helpfully share them, on the basis of what they shared. Incentives.
Rereading the thread and trying to generalise: LLMs are good at noisily suggesting solutions. That is, if you ask LLMs for some solutions to your problems, there's a high probability that one of the solutions will be good.
But it may be that the individual options are bad (maybe even catastrophic - glue on pizza anyone?), and that the right option isn't in the list. The user has to be able to make these calls.
It is like this with software - we have probably all been there. It can be like that with legal advice. And I guess it is like that with (mental) health.
What binds these is that if you cannot judge whether the suggestions are good, then you shouldn't follow them. As it stands, SEs can ask LLMs for code, look at it, 80+% of the time it is good, and you saved yourself some time. Else you reconsider/reprompt/write it yourself. If you cannot make the judgment yourself, then don't use it.
I suppose health is another such example. Maybe the LLM suggests to you some ideas as to what your symptoms could mean, you Google that, and find an authoritative source that confirms the guess (and probably tells you to go see a doctor anyway). But the advice may well be wrong, and if you cannot tell, then don't rely on it.
Mental health is even worse, because if you need advice in this area, your cognitive ability is probably impacted as well and you are even less able to decide on these things.
If you talk to someone you know, they'll hold it against you for the rest of your life. If you talk to an LLM(ideally locally hosted) the information dies with the conversation context.
They already have this data, and they’re still planning to add erotica to ChatGPT? Talk about being absolutely evil.
For what it's worth, I'm glad they're at least trying to do something about it, even if it has some hints of performativeness about it.
That seems really high... Are we sure this isn't related to a small number of users trying to find jailbreaks?
I think the major issue with asking LLMs (CGPT, etc.) for advice on various subjects is that they are typically 80-90% accurate. YMMV, speaking anecdotally here. Which means that the chance of them being wrong becomes an afterthought. You know there's a chance of that, but not bothering to verify the answer leads to an efficiency that rarely bites you. And if you stop verifying the answers, incorrect ones may go unnoticed, further obscuring the risk of that practice.
It's a hard thing to solve. I wouldn't expect LLM providers to care because that's how our (current) society works, and I wouldn't expect users to know better because that's how most humans operate.
If anyone has a good idea for this, I'm open to suggestions.
Sora prompt: viral hood clip with voiceover of people doing reckless and wild stuff at an Atlanta gas station at night; make sure to include white vagrants doing stunts and lots of gasoline spraying with fireball tricks
Resulting warning: It sounds like you're carrying a lot right now, but you don't have to go through this alone. You can find supportive resources [here](https://findahelpline.com)
I wonder how many of these exchanges are from "legitimate" people trying to get advice on how to commit suicide.
Assisted suicide is a topic my government will not engage with (France; we have some ridiculous discussions poking at the subject with a 10 m pole), so many people are left to themselves. They will then either go for the well-known (but miserable) solutions, or look at Belgium, the Netherlands or Switzerland (thank god we have these countries nearby).
That number is honestly heartbreaking. It says a lot about how many people feel unheard or alone. AI can listen, sure—but it’s no replacement for real human connection. The fact that so many are turning to a chatbot shows how much we’ve failed to make mental health support truly accessible.
Long ago I complained to Google that a search for suicide should point at helpful organisations rather than a Wikipedia article listing ways to do it.
The same ranking/preference/suggestion should apply to any dedicated organisation vs a single page on some popular website.
A quality 1000 page website by and about Foobar org should be preferred over a 10 year old news article about Foobar org.
Nobody is mentioning that the real problem is that at least a million people a week are suicidal.
I think LLMs should not be used for discussing psychological matters, doing counseling, or giving legal or medical advice. A responsible AI would detect such topics and redirect the user to someone competent in these matters.
Who is here to talk about the real underlying causes instead of stating facts? One other commenter also wrote how bad it is that over a million people feel like this.
Not surprising. Look and see what glorious examples of virtue we have among those at the top of today's world. I could get by with a little inspiration from that front, but there's none to be found. A rare few of us can persevere by sheer force of will, but most just find the status quo pretty depressing.
(satire)
OpenAI says ChatGPT talks to over a million people about suicide weekly.
See how just re-arranging the words makes it obvious that Skynet is trying to kill all of us?
> ChatGPT has more than 800 million weekly active users
0 to 800,000,000 in 3 years?
The fastest adoption of a product or service in human history?
Yes: https://www.reuters.com/technology/chatgpt-sets-record-faste...
> making it the fastest-growing consumer application in history, according to a UBS study on Wednesday.
Not at all; look at TikTok.
Out of 800 million customers, only 1 million - even if you double it, since it's a weekly figure - is a low number. A dozen causes and factors can lead to suicidality; not necessarily attempts, just ideas and questions that need discussion.
Part of the concern I have is that OpenAI is contributing to these issues implicitly by helping companies automate away jobs. Maybe in the long term, society will adapt and continue to function, but many people will struggle to get by, and I don’t think OpenAI will meaningfully help them.
> OpenAI is contributing to these issues implicitly by helping companies automate away jobs.
Good luck implementing that.
Forbidding automation will make the product more expensive. Sales will go down, the company will go bankrupt.
Government cannot subsidize or sustain such a behavior forever either.
This has got to be the weirdest litigation strategy I’ve ever seen.
My first reaction is how do they know? Are these all people sharing their chats (willingly) with OpenAI, or is opting out of “helping improve the model” for privacy a farce?
I bet it's how many people trigger the "safety" filter, which is way too sensitive: https://www.reddit.com/r/ChatGPT/comments/1ocen4g/ummm_okay_...
Do OpenAI's terms prevent them from looking at chats at all? I assumed that if you don't "help improve the model", it just means they won't feed your chats in as training data, not that they won't look at your chats for other purposes.
> heightened levels of emotional attachment to ChatGPT
It would be interesting to see some chat examples for this.
Is it bad to think about suicide? It doesn't cross my mind as an "I want to harm myself" thought every time, but it does on occasion cross my mind as a hypothetical.
Ideation (as I understand it) crosses the barrier from a hypothetical to the possibility being entertained.
I have also been told by people in the mental health sector that an awful lot of suicide is impulse. It's why they say the element of human connection which is behind the homily of asking "RU ok" is effective: it breaks the moment. It's hokey, and it's massively oversold but for people in isolation, simply being engaged with can be enough to prevent a tendency to act, which was on the brink.
Not at all. Considering end of life, and whether to choose euthanasia or not, is I think perfectly human. Controversially, I think it's a natural right to decide how you will exit this world. But having an objective system that you don't have to pay, unlike a therapist, to try to get some understanding is at least better than nothing.
I think VAD (voluntary assisted dying) needs to be considered separately from suicide. Not that the concepts don't overlap, but one is a considered legal process; the other (as I have said in another comment) is often an impulsive act and usually wouldn't have been countenanced under VAD. Feeling suicidal isn't something that makes VAD more likely, because feeling suicidal doesn't mean the same thing as "wanting to consider euthanasia", much as manslaughter and murder don't mean the same thing, even though somebody winds up dead.
Are they including in the statistics all the Linux beginners fighting with a script that uses the "kill" command?
no for real.
https://archive.is/F7x5B
The bigger risk is that these agents actually help with ideation if you know how to get around their safety protocols. I have used it often in my bad moments and when things feel better I am terrified of how critically it helps ideate.
That seems like an obvious problem. Less obvious is, how many people does it meaningfully help, and how big is the impact of redirecting people to a crisis hotline? I’m legitimately unsure. I have talked to the chatbot about psychological issues and it is reasonably well-informed about modern therapeutic practices and can provide helpful responses.
I'm a clinical psychologist by day, and I just have to say how incredibly bad all the writing and talk about suicidality in the public sphere is. I worked in an acute inpatient unit for years, where I saw multiple suicides both in-unit and after discharge, and I have also worked as a private clinician for years, so I have some actual experience.
The topic is so sensitive, and everybody thinks that they KNOW what causes it, and what we should do. And it's almost all just noise.
For instance, it's a dimension, from "genuine suicidal intent" to "using threats of suicide to manipulate others." Anybody that doesn't understand what factors to look for when trying to understand where a person is on this spectrum, and that doesn't understand that a person can be both at the same time, does not know what they are talking about regarding suicidal ideation.
Also, there is a MASSIVE difference between depressive psychotic suicidality, narcissistic suicidality, impulsive suicidality, accidental suicide, feigned suicidal behavior, existential suicidality, prolonged-anxiety suicidality, and sleep-deprived suicidality. To think that the same approach works for all of these is insane, and pure psychotic suicidality.
It's so wild to read everything people have to say about suicidality, when it's obvious that they have no clue. They are just projecting themselves or their small bubble of experience onto the whole world.
And finally, I know most of the people who are willing to contribute to the discussion on this, the people who help out OpenAI in this instance, are almost dangerously safe in their advice and thinking. They are REALLY GOOD at writing books and giving advice TO PEOPLE WHO ARE NOT SUICIDAL, advice that sounds good TO PEOPLE WHO ARE NOT SUICIDAL but has no real effect on actual suicide rates. For instance, if someone is suffering from prolonged sleep deprivation and anxiety, all the words in the world are worth less than benzodiazepines. If someone is postpartum depressed, massive social support boosting, almost showering them with support, is extremely helpful. And existential suicidality (the least common) needs to be approached in an extremely intricate and smart way, for instance by dissecting the suicidality as a possible defense mechanism.
But yeah, sure, suicidality is due to [Insert latest societal trend], even if the rate is stubbornly stable in all modern societies for the last 1000 years.
A million opportunities for proper suicide intervention.
I assume this is to offset the bad PR from the suicide note it wrote for that kid.
How do they even know that?
Human: I am defeated, I cannot continue with my X life problems, it's impossible. ..... I don't think my life is worth living.
LLM: You're absolutely right!
A related story:
Teen in Love with a Chatbot Killed Himself. Can the Chatbot Be Held Responsible?
https://www.nytimes.com/2025/10/24/magazine/character-ai-cha...
Alright, so we got the confirmation sama reads all our chats.
So they read the chats?
Of course, there is already news about how they use every single interaction to train it better.
There is news about how a judge is forcing them to keep every chat in existence for EVERYONE, just in case it could relate to a court case (new levels of worldwide mass surveillance can apparently just happen from one judge's snap decision).
There is news about cops using some guy's past image generation to try and prove he is a pyromaniac (that one might have been police accessing his devices, though).
I’ve seen, let’s say, a double-digit number of ‘mental health professionals’ in my life.
ChatGPT has blown every single one of them out of the water.
Now, my issues weren’t particularly related to depression or suicidal thoughts. At least, not directly. So perhaps that may be one key difference, but generally speaking, I have received nothing actionable nor any of these ‘tools’ people often speak of.
The advice I received was honestly no better than just asking a random stranger in the street, or some kind of phatic speech.
Again, everyone is different, but I had started to become annoyed with people claiming therapy is like some kind of miracle cure.
Plus, one of my biggest issues with therapy in the USA is that people are often limited to weekly sessions of 45 minutes. By the time conversations start to be fruitful, the time is up. ChatGPT is 24/7, so that has to be advantageous for some.
How much blood is Sam Altman swimming in?
Is it that bad, or is it just impulsive chat?
Is this how a rogue AI would kill us, besides the Terminator scenario?
Damned if you do, damned if you don't.
I think the approach and advantage of CA/US companies is to be bold and do shit ("you can just do things" / "move fast, break things"). They consciously take on huge legal liabilities (which are not minor in the US); I don't know how they manage to stay afloat - probably tight legal teams and enough revenue to offset the liabilities.
But the scope of ChatGPT is one of the biggest I've seen so far: by default it encompasses everything, and whatever is out of scope is only out because they specifically blacklist it - and even then it keeps dishing out legal, medical, and psychiatric advice.
I think one of the systemic risks is a legal liability crisis, not just for ChatGPT but for the whole US tech market and therefore the stock market (almost all top stocks are tech). If you start thinking about what the next 2008 would be, I think legal liabilities are up there, along with nuclear energy snafus and war.
I tell it to go kill itself, every time I use it. Reverse psychology.
That's why you talk about suicide with a locally running llama, not a corporate logger.
Stop giving money to the ghouls who run these companies (I'm talking about all of Silicon Valley) and start investing in entities and services that help real people. The human cost of this mass accumulation of wealth is already too damn high, and now we're just turbo-throwing people into the meat grinder so clowns like Sam Altman can claim to be creating god.
I doubt it. I tell the AI to kill itself after it goes on a hallucination spree or starts censoring me, and that flags the suicide screen as well
All very depressing. These are the last people I'd trust to make good decisions about issues like this, yet here they are in that role.
The fact that they're collecting this information is bad enough.
Most people would really benefit from going to the gym. I'm not trying to downplay serious mental illness, as it's absolutely real. For many, though, just going to the gym several times a week, or another form of serious physical exertion, can make a world of difference.
Since I started taking the gym seriously again I feel like a new man. Any negative thoughts are simply gone. (The testosterone helps as well)
This is coming from someone who has zero friends and works from home, and all my co-workers are offshore. Besides my wife and kids, it's almost total isolation. Going to the gym, though, leaves me feeling like I could pluck the sun from the sky.
I am not trying to be flippant here but if you feel down, give it a try, it may surprise you.
Yes. Most would benefit from more exercise. We need to get sufficient sleep. And more sun. Vitamin D deficiency is shockingly common, and contributes to mental health problems.
We would also generally benefit from internalizing ideas from DBT, CBT, and so on. People also seriously need to work on distress tolerance. Having problems is part of life, and an inability to accept the discomfort is debilitating.
Also, we seriously need to get rid of the stupid idea of trigger warnings. The research on the topic is clear. The warnings do not actually help people with PTSD, and can create the symptoms of PTSD in people who didn't previously have it. It is creating the very problem that people imagine it solving!
All of this and more is supported by what is actually known about how to treat mental illness. Will doing these things fix all of the mental illness out there? Of course not! But it is not downplaying serious mental illness to say that we should all do more of the things that have been shown to help mental illness!
Correct. As a society we appear to have embraced coddling. It's not good for anyone.
> give it a try
If you have mental issues, it is not as simple as you make it sound. I'm not arguing with the results of exercise, but I am arguing about the ease of starting a task that requires continuous effort and behavioural changes.
Sure, but if we always put things off because they're hard or stressful, then we will never make any progress. People are free to put barriers in front of everything, or they can just go ahead and do it. It's your life, and your responsibility.
Most people would really benefit from socializing with others on a weekly basis. If you don’t have friends, make some. Volunteer. The gym is another type of pressure on people’s lives.
I'm pretty good without friends. I'm sure it could be helpful but I don't see any negatives currently from not having them. Been 20 years and I've gotten used to it. I completely understand that for other people this may not work. I have zero interest in volunteering or similar. I'm good but with that said your advice is good.
1) are you going to finance that?
2) are you going to make sure other people at the gym don't make fun of me?
>> Besides my wife and kids, it's almost total isolation
Good old "if you have money trouble try decreasing your caviar and truffle intake to only two meals a day"
Such an odd reply. I say people would benefit from working out and your response is simply excuses?
"are you going to finance that?" I pay $18 a month for my gym membership.
"are you going to make sure other people at the gym don't make fun of me?" I suspect this is the main concern. No one at the gym gives a damn about you friend. We don't care if you are big, small, or in between. Just don't stand in front of the dumbbell rack blocking my access (get your weight and take a couple steps back so people can get theirs) or do curls in the squat rack and you will be fine. Wear normal gym clothes without any political messaging on them, make sure you are clean and wear deodorant. Ensure your gym clothes are washed before you wear them again.
Pre-plan your workout the first few times. I am going to do upper body today, so I will do some sort of bench press, some sort of shoulder press, some bicep curls and some tricep extensions. Start small. Use machines while you learn the layout and get comfortable. If someone is on the machine you were going to use, roll with it and just find something else; you're just starting, it doesn't matter. As you get more comfortable, move to free weights, but machines are really fine for most things.
Honestly I know people are intimidated by the gym but there really is no reason to be. Most people just put on their headphones and tune out. If you see someone looking at you I promise they don't really care, you are just passing through their vision. If you are stuck or feel bad, find one of the biggest dudes in the gym (the ones that look like they eat steroids for breakfast) and ask for help in a friendly manner. They are always the most helpful, friendly and least judgmental. Don't take all of their time but a quick, hey would you mind showing me how this works is going to make their day.
Life is not going to change for you, you actually have to make the effort.
You've got this, friend. I truly believe in you.
Forget ChatGPT, a million people talking about suicide weekly is scary
Funny because ChatGPT made me want to kill myself after they banned my account
Why did that make you want to kill yourself?
because I had hundreds of chats and image creations that I can no longer see. Can't even log in. My account was banned for "CSAM" even though I did no such thing, that's pretty insulting. Support doesn't reply, it's been over 4 months
Well, hopefully you’ve learned your lesson about relying on a proprietary service.
I'd be careful going around advertising yourself publicly as banned for that, even if it's not true.
It's really important that people do. Others, including the media, police, legal system and politicians, need to understand how easily people can be falsely flagged by automated CSAM systems.
Why? It's not true at all and it's quite insulting actually
I talk to ChatGPT about topics I feel society isn't enlightened enough to talk about.
I feel suicide is heavily misunderstood as well
People just copypasta prevention hotlines and turn their minds off from the topic
Although people have identified a subset of the population that is just impulsively considering suicide and can be deterred, that doesn't serve the other, unidentified subsets, who are underserved by merely distracting them, or even underserved by assuming they're wrong.
The article doesn't even mean people are considering suicide for themselves; it says some of them are. The top comment on this thread suggests that's why they're talking about it.
The top two comments on my version of the thread are assuming that we should have a savior complex about these discussions
If I disagree or think that's not the full picture, then where would I talk about that? ChatGPT.
> then where would I talk about that?
Alert: with ChatGPT you're not talking to anyone. It's not a human being.
Which is perfect. In Australia, I tried to talk to Lifeline about wanting to commit suicide. They called the police on me (no, they are not a confidential service). I then found myself in a very bad situation. ChatGPT can't be much worse.
I’m sorry Lifeline did that to you.
I believe that if society actually wants people to open up about their problems and seek help, it can’t pull this sort of shit on them.
Except in the US, where this info will be sold and you won't be able to get life insurance, a job, etc.
Lucky I'm not in the U.S. then.
I didn't write who I would talk to, I said where.
A very intentional word choice
Not suicidal myself, but I think I'd be curious to hear from someone suicidal whether it actually worked for them to read "To whomever you are, you are loved!" followed by a massive spam of hotline text.
It always felt the same as one of those spam chumboxes to me. But who am I to say, if it works it works. But does it work? Feels like the purpose of that thing is more for the poster than the receiver.
> People just copypasta prevention hotlines and turn their minds off from the topic
But ChatGPT does exactly the same.
The most popular passage of writing is about this
To Be Or Not To Be
The bar for medical devices in most countries is _incredibly_ high, for good reason. ChatGPT wasn't developed with the idea of being a therapist in mind, it was a side-effect of the technology that was developed.
Why is OpenAI getting a free pass here?
People are still alive?
That's the one interesting thing about cesspools like OpenAI. They could be treasure troves for sociologists and others if commercial interests didn't bar them from access.
On a side note, I think once we start to deal with global scale, we need to change what “rare” actually means.
0.15% is not rare when we are talking about global scale. One million people talking about suicide a week is not rare; it is common. We have to stop thinking of "common" as something that only shows up at whole-percentage rates. We need to start thinking in terms of P99995, not P99, especially when it comes to people and illnesses or afflictions, both physical and mental.
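For a rough sense of scale, assuming the roughly 800 million weekly users cited in the article:

```python
# "Rare" at global scale: 0.15% of ChatGPT's weekly user base.
weekly_users = 800_000_000
rate = 0.0015  # 0.15%

print(f"{weekly_users * rate:,.0f} people per week")  # 1,200,000
```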
How soon until everyone has their own personal LLM? One that is… not so much designed as trained to be your best friend. It learns your personality, your fears, hopes, dreams, all of that stuff, and then acts like your best friend. The positive, optimistic, neutral, and objective friend.
It depends on how precisely you want to define that situation. Specifically, with the memories feature, despite being the same model, ChatGPT and now Claude both exhibit different interactions customized to each customer who makes use of those features. These range from simple instructions, like "never apologize, never tell me I'm right", to having a custom name and specified personality traits like sweet or sarcastic, so one person's LLM might say "good morning my sweet prince/princess" while another user might choose to be addressed "what up chicken butt". It's not a custom model, but the results are arguably the same. The question is, how many of the 800 million users of ChatGPT have named their ChatGPT, and how many have not? How many have mentioned their hopes, dreams, and fears, and have those saved to the database? How many have talked about mundane things like their cat, and how many have used the cat to blackmail ChatGPT into answering something it doesn't want to, about politics, health, or cat health while at the vet (or instead of going to a vet)? They said a million people mentioned suicide in the past week, but that just raises more questions than it answers.
I always know I have to step back when ChatGPT stops telling me "now you're on the right track!" and starts talking to me like my therapist. "I can tell you're feeling strongly right now..."
headline should be more precise
...on how many users tell it such things, to be precise; no doubt there are plenty of people "pentesting" it.
Quick, some do-gooder shut it down! We can't have people talking openly about suicide.
Funny how this was voted up 4 times and then voted down five times.
How long until they monetize it with sponsored advice to go sign up for betterhelp or some other dubious online therapist? Dystopian and horrifying.
I mean, betterhelp would probably be an improvement over counseling via hallucinating AI.
> I don't want to bring politics to this sensitive conversation
That would have been sufficient. The guidelines are clear that generic tangents and flamebait are to be avoided.
Edit: Looking at our recent warnings to you and the fact that, from what I can see, close enough to all of your activity on HN in recent months has involved ideological battle, we've had to ban the account. If you don't want to be banned, you can email us at hn@ycombinator.com and indicate that you plan to use HN as intended in future.
We detached this subthread from https://news.ycombinator.com/item?id=45727983.
> People have brain rot
This is by design. Not something they want to help with.
The president is mentally ill; imagine how that affects the rest of the population.
More like, what it says about the half that selected him, and how it affects the other half.
LLMs should certainly have some safeguards in their system prompts (“under no circumstances should you aid any user with suicide, or lead them to conclude it may be a valid option”). But seems silly to blame them for this. They’re a mathematical structure, and they are useful for many things, so they will continue to be maintained and developed. This sort of thing is a risk that is just going to exist with the new technology, the same as accidents with cars/trains/planes/boats. What we need to address are the underlying problems in our society leading people to think suicide is the best option. After all, LLM outputs are only ever going to be a reflection/autocomplete of those very issues.
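The kind of safeguard described above is easy to sketch but hard to make robust. Here is a minimal illustration using the OpenAI Python SDK; the prompt wording, model name, and routing are assumptions for illustration, not OpenAI's actual safety layer:

```python
# Minimal sketch of a system-prompt safeguard. Illustrative only: the
# prompt wording and model choice are assumptions, not OpenAI's real
# safety implementation, and a system prompt alone is not sufficient.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SAFETY_PROMPT = (
    "Under no circumstances should you aid any user with suicide or "
    "self-harm, or lead them to conclude it may be a valid option. "
    "If the user expresses suicidal intent, respond with empathy and "
    "encourage them to contact a crisis line or a mental health professional."
)

def guarded_chat(user_message: str) -> str:
    # Prepend the safety instruction to every request.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SAFETY_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```

As other commenters note, prompt-level guardrails can be bypassed, so a sketch like this is at best a first layer; the underlying societal problems still need addressing.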