According to a statement on Google’s official blog, Gemini can detect alarming expressions during a conversation with the user. When a chat suggests a risk of suicide or self-harm, the system steps in and directs the user to emergency help.
One-touch access to the helpline
The new system introduces an interface called “Help is available,” which gives users one-touch access to crisis support lines. Through it, users can make phone calls, send messages, start live chats, or be directed to support sites.
This feature was developed in collaboration with clinical experts. Additionally, the help option remains visible throughout the chat.
Google has also trained Gemini not to validate false psychological claims and to carefully distinguish facts from users’ subjective experiences.
$30 million in support
Alongside the technical updates, Google announced financial support: Google.org will provide a total of $30 million to crisis support lines around the world over the next three years.
Of this budget, $4 million will go toward expanding the collaboration with ReflexAI, with the aim of strengthening the mental health support services of non-governmental organizations and integrating Gemini into training programs in the field.
Special precautions for young people
Google also detailed the safety measures in place for young users: Gemini cannot present itself as human, does not use language aimed at forming an emotional bond, and does not simulate “friendship” or “closeness” with the user.
These measures aim to reduce the risk of emotional dependency, especially among young people.
Mounting lawsuits and pressure
The new features arrive at a time when AI companies face growing legal pressure. According to KQED, the family of a 36-year-old who died in Florida has filed a lawsuit against Google, claiming that the person’s chatbot use escalated over four days into a crisis that ended in suicide.
Google’s parent company, Alphabet, is not alone. OpenAI and other AI developers face similar lawsuits alleging that chatbots can foster emotional dependency and, in some cases, trigger harmful thoughts.
In the absence of clear federal regulation in the US, the courts are said to play an important role in pushing technology companies to act more responsibly.
In a case heard in Los Angeles in March, a jury found Meta and Google negligent over social media addiction. The verdict also reignited debate over the limits of Section 230, the protection that shields technology companies from liability for user-generated content.
The artificial intelligence industry is transforming
Over the past 18 months, companies such as OpenAI and Anthropic have also updated their mental health safeguards. According to experts, these developments show that the AI industry is beginning to pay more attention to the psychological sensitivities of its users.