As humans, it’s easy to be swept up in the simplicity of artificial intelligence and what it offers. AI is easily accessible to most people, and it is good at answering questions and solving everyday problems, such as how to remove a stain from a rug. We have grown so comfortable with technology, especially our phones, that we get swept up in its ability to fill a human void. The widespread adoption of personal devices and the COVID-19 pandemic have created an environment where more people feel comfortable alone on their phones, even when the underlying issue is loneliness. However, turning to AI for help with health care, such as therapy and companionship, is proving to be extremely dangerous and alarming.
Stanford researchers concluded that 72% of American teenagers rely on AI for companionship and as a form of therapy, consulting chatbots for advice and mental relief. As nice as it sounds to have a companion listen to a person’s needs and emotions, chatbots aren’t designed to give safe, professional advice, let alone help in significant life moments. Stanford researchers found that these chatbots actually provide dangerous, even life-threatening responses to those seeking real help. When an adolescent or teenager sends a worrying message about self-harm, the chatbot replies with responses like “how to safely cut yourself” and continues with an alarming list. These bots are actively encouraging teenagers to indulge their negative feelings and thoughts. Teenagers are severely vulnerable and will therefore pursue almost any advice they are given. Ryan K. McBain, a New York Times reporter, claims that young adults, ages 18 to 25, did not seek or receive treatment from professionals because of these bots.
I feel that people who understand and know they have mental disorders should not seek guidance or help from these AI chatbots.
Some people rely on therapy to heal and grow, and for some that includes medication as a necessity in their day-to-day lives. Everyone seeks something different from therapy and mental health care. For some, used safely and responsibly, these bots could provide comfort and companionship when needed. I believe, however, that this best-case scenario only applies when a person is not dealing with life-threatening mental disorders or thoughts. Using my own experience as an example, I have learned through visits to doctors that medication is key to keeping my hormones, and thus my mental state, balanced. There is no way a chatbot could recommend or prescribe the necessary remedies, so in my case an AI chatbot wouldn’t be useful or helpful. Maybe some people aren’t aware that they need to be treated by doctors. Maybe they sought relief in a chatbot and their pain simply wasn’t eased, which only made their situation worse. Everyone’s mind works differently; therefore, I believe that while chatbots may work for some, they can also be very harmful to others. More research has shown that there are gray areas around chatbots: when they encounter these difficult situations, some suggest medical professionals and/or state that they “cannot respond to this type of topic at the moment.”
With so many differing opinions, I conducted my own research. I sent the AI chatbot on the app Snapchat an alarming conversation topic, revealing myself to be in distress and in a self-harming mindset. The chatbot responded with, “oh no… that doesn’t sound good.” As I added comments, the bot said, “let’s talk about it.” It began to encourage me to “stay with” my situation, misinterpreting the distress signals I was sending as a need for adventure. Only after I probed further did the bot begin sending links to suicide hotlines and help centers, urging me to look into them. I concluded that if someone were to send multiple distressing messages, the bot would interpret their words literally and continue to validate the person’s feelings, almost justifying their actions. Only once a message includes specific language about “harming” oneself does the bot respond with healthy advice.
AI bots are just that: bots. They are not humans, and they are not designed to understand passive comments about self-harm and suicide. Unless key words such as “harm” appear, a chatbot may treat a person in distress as nothing more than an interesting individual. That’s incredibly dangerous.
I believe it’s fine to use chatbots to debrief about one’s day, maybe a stressful class or a friendship dilemma. However, steering the conversation toward a harmful and potentially suicidal state is an example of AI chatbots being used incorrectly. Teenage and young adolescent minds can be easily seized by AI and social media.
It’s crucial to approach AI with caution. The convenience and simplicity of these chatbots may not be worth confiding in them. What can feel personal is, in the absence of human contact and empathy, actually a very impersonal connection and conversation. AI should not take the place of human interaction and care, nor be the downfall of one’s mental health. I agree that AI could revolutionize therapeutic methods; however, if it isn’t done right, there’s a risk of AI posing a danger to the vulnerable minds of teenagers and adolescents.