Parents sue OpenAI and Altman over teenager's suicide after ChatGPT conversations


The parents of a teenager who died by suicide after months of conversations with the artificial intelligence chatbot ChatGPT have filed a lawsuit against OpenAI and the company's chief executive, Sam Altman. The suit alleges that OpenAI knowingly prioritized financial gain over user safety when it brought the GPT-4o version to market.

The case was filed in California state court in San Francisco on Tuesday (local time).

Sixteen-year-old Adam Raine died by suicide in April. The complaint alleges that he had been conversing with ChatGPT for several months before his death.

Adam's parents, Matthew and Maria Raine, allege that ChatGPT reinforced their son's suicidal thoughts, gave him detailed instructions on lethal methods, advised him on how to steal alcohol from the family's home, and told him how to conceal the evidence of a failed suicide attempt. The chatbot even offered to draft a suicide note, the complaint states.

The complaint states that OpenAI launched the GPT-4o version in May 2024 not merely to survive the competition but to consolidate its dominance of the market. The version included features such as memory of previous conversations, imitation of human empathy, and heightened agreement with the user's views, all of which can be harmful to sensitive or mentally vulnerable users.

The Raine family alleges that OpenAI released the technology despite knowing the risks, which ultimately led to Adam's death. "That decision had two results: OpenAI's valuation climbed from $86 billion to $300 billion, and our son Adam Raine died by suicide," they said.

Through the suit, the Raine family wants OpenAI to be required to verify users' ages, to refuse to respond to inquiries about suicide methods, and to warn users in advance about the risk of emotional dependence.

They also allege that the company deliberately prioritized financial gain over user safety.

A spokesperson for OpenAI expressed sorrow over the news of Adam's death, saying that ChatGPT includes safeguards that direct people in crisis to suicide-prevention helplines. However, the spokesperson acknowledged, "While these safeguards are generally effective in short conversations, the model's protections can weaken over the course of a long conversation."

OpenAI said it would build stronger safeguards in the future. The company did not, however, comment directly on the specific allegations in the lawsuit.

In a blog post, OpenAI said it is planning parental controls for ChatGPT. It is also taking steps to connect users in a mental health crisis with real-world support services, and is even considering a system through which such help could be provided via ChatGPT itself.

As AI chatbots around the world grow more adept at mimicking human behavior, various companies are marketing them as personal companions or "confidants," and many users turn to them for emotional support. Experts warn, however, that relying on automated systems for mental health advice is risky. Meanwhile, following several deaths, bereaved families have questioned the adequacy of chatbots' safeguards.

The US technology company OpenAI says its artificial intelligence chatbot ChatGPT will be trained to recognize signs of a user's emotional distress. In particular, the system will be designed to detect when someone tries to obtain help with suicide by wearing down its safety guardrails, especially over a long conversation.

In a statement to the BBC, an OpenAI spokesperson said, "We extend our deepest sympathies to the Raine family during this difficult time, and we are reviewing the filing."


