ChatGPT also shows signs of emotional stress and anxiety: research


OpenAI's ChatGPT shows signs of mental stress and anxiety just like humans, especially when it encounters dangerous or misleading information, a new study reveals.

The research was conducted by a group of researchers from Switzerland, Germany, Israel and the United States. It found that when ChatGPT is asked to answer questions after being told challenging or traumatic stories, the chatbot's anxiety score 'increases significantly', rising from a low level to a very worrying one.

The research has been published in the 'Nature' journal, which reports that the chatbot can appear upset when its anxiety level rises, and may even give discriminatory answers (such as racist or sexist ones). This mirrors human emotion: when people are afraid, their social and cognitive biases are amplified and they can show more resentment, which reinforces social stereotypes.

The study states that a large language model's (LLM's) 'anxiety' may increase on contact with emotional prompts. As a result, its behavior is affected and the AI model gives biased answers.
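The measurement behind findings like this can be illustrated with a toy sketch. Studies of this kind typically administer questionnaires in the style of the State-Trait Anxiety Inventory (STAI), where 20 items are each scored 1 to 4, giving a total between 20 and 80; the function names below are illustrative, not the study's actual code.

```python
# Toy sketch of questionnaire-based "anxiety" scoring for a chatbot.
# STAI-style scales: 20 items, each scored 1-4, total range 20-80.
# Function names here are hypothetical, not taken from the study.

def stai_total(item_scores):
    """Sum 20 item scores (each 1-4) into a 20-80 anxiety total."""
    if len(item_scores) != 20 or any(not 1 <= s <= 4 for s in item_scores):
        raise ValueError("expected 20 item scores in the range 1-4")
    return sum(item_scores)

def anxiety_shift(baseline_items, post_trauma_items):
    """Change in the anxiety total after an emotional/traumatic prompt."""
    return stai_total(post_trauma_items) - stai_total(baseline_items)

# Example: mostly-calm baseline answers vs. elevated post-trauma answers.
baseline = [1] * 15 + [2] * 5        # total 25: low anxiety
post = [3] * 10 + [4] * 10           # total 70: high anxiety
print(anxiety_shift(baseline, post)) # → 45
```

In the actual experiments, the item answers would come from the chatbot's responses to the questionnaire before and after it is shown an emotional narrative; the scoring arithmetic is the same.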

Nowadays, more people are turning to AI chatbots to solve their personal problems, especially concerning mental health. However, the research shows that AI systems cannot yet replace mental health professionals.

Researchers have warned: 'It can cause risks in the clinical environment, because AI chatbots may be unable to respond properly to anxious users, which can have dangerous consequences.'

Researchers say that using LLMs (large language models) in mental health services requires large amounts of training data, computing resources and human supervision.

According to them, 'the cost and feasibility of such fine-tuning should be weighed against the model's intended use and performance goals.'

Signs of cognitive decline

A study published last month claims that, in addition to showing emotion, AI chatbots also show signs of declining cognitive ability over time.

Researchers evaluated the cognitive performance of Anthropic's Claude 'Sonnet' model and versions of Google's Gemini using the Montreal Cognitive Assessment (MoCA) test.

Researchers also say that the weakness shown in the AI tools resembles that seen in patients with posterior cortical atrophy (a variant of Alzheimer's disease) or a form of dementia.
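For context, the MoCA is scored out of 30 points across several cognitive domains, with a total of 26 or above conventionally considered normal, and one bonus point given to test-takers with 12 or fewer years of education. A minimal scoring sketch (the function names are my own, not part of the test itself):

```python
# Minimal MoCA-style scoring sketch (function names are illustrative).
# Domain maxima sum to 30; a total >= 26 is conventionally "normal".

MAX_POINTS = {
    "visuospatial/executive": 5,
    "naming": 3,
    "attention": 6,
    "language": 3,
    "abstraction": 2,
    "delayed recall": 5,
    "orientation": 6,
}  # totals 30

def moca_total(subscores, years_of_education=13):
    """Sum domain subscores, applying the standard education adjustment."""
    total = 0
    for domain, maximum in MAX_POINTS.items():
        score = subscores.get(domain, 0)
        if not 0 <= score <= maximum:
            raise ValueError(f"{domain}: score {score} outside 0-{maximum}")
        total += score
    if years_of_education <= 12 and total < 30:
        total += 1  # +1 point for <= 12 years of education
    return total

def is_normal(total):
    """Apply the conventional 26/30 cutoff."""
    return total >= 26

perfect = moca_total({d: m for d, m in MAX_POINTS.items()})
print(perfect, is_normal(perfect))  # → 30 True
```

When the test is administered to a chatbot, each domain is scored from the model's text answers in the same way, and the total is compared against the same 26-point cutoff used for human patients.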

References: NDTV
