OpenAI warns that ChatGPT via voice could cause emotional dependence in users

The company says the effect could benefit lonely users but harm those in healthy relationships.

The American startup OpenAI, the developer of ChatGPT, stated in a document published on Friday the 9th that GPT-4o, the company's most advanced model, unveiled in May, could make users emotionally attached to its artificial intelligence when used in voice mode, in which spoken commands are executed, similar to Amazon's Alexa and Apple's Siri.

The document says that OpenAI's internal tests found that "users can form social relationships with AI." Furthermore, the AI reduced the "need for human interactions" among the people analyzed, with detrimental effects for people in healthy relationships but beneficial ones for those who are lonely.

The document also notes that OpenAI's AI models have proven capable of remaining under human control, without attempting to escape that control, deceive individuals, or devise catastrophic plans.

GPT-4o can receive commands and generate responses in text, voice, video, and image formats. The voice chat function launched in July, with a focus on making dialogue with the user more realistic, natural, and less robotic.

In May, when the chatbot was unveiled, OpenAI was criticized for giving the bot a voice similar to that of actress Scarlett Johansson (known for the 2013 film "Her," in which she voices a Siri-inspired digital assistant), who had rejected a request to voice the AI months earlier. After receiving criticism from the actress herself, the company backtracked and changed the voice the AI uses.

Documents published by the startup also indicate that voice mode can malfunction in response to random noise and that, in some tests, ChatGPT adopted a voice similar to the interlocutor's. It was also found that the chatbot can be more persuasive when arguing for some points of view than for others.

Testing also revealed that ChatGPT could react inappropriately, producing sexual, violent, or even agitated responses to users.