Lodi Valley News.com


Why is being polite to ChatGPT a good thing?


Since ChatGPT's rise to popularity, one of the main topics discussed by social media users has been politeness toward the chatbot. While some say "please" and "thank you," others are more direct and go straight to the question. However, a recent paper by Microsoft researchers found that generative artificial intelligence (AI) models perform better when addressed in a human way.


In other words, phrasing requests politely can lead the AI to deliver better results. The researchers also reported that models give more assertive responses when a question is framed as urgent or important. Communicating how much an answer matters, for example by saying it is essential to your professional career, prompts chatbots to provide more complete and accurate answers.
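The two prompting styles described above can be sketched as chat-message variants. This is an illustrative example, not code from the article or the Microsoft paper; the `build_messages` helper is a hypothetical name, and the polite wording is just one way to add the courtesy and urgency framing the researchers describe.

```python
def build_messages(question: str, polite: bool = False) -> list[dict]:
    """Wrap a question in either a terse or a polite, urgency-framed prompt.

    Returns a chat-style message list of the kind most LLM APIs accept.
    """
    if polite:
        # Polite variant: courtesy words plus a statement of importance,
        # mirroring the framing the article says improves responses.
        content = (
            "Hello! Could you please help me with the following question? "
            "An accurate answer is very important for my work. "
            f"{question} Thank you!"
        )
    else:
        # Direct variant: just the bare question.
        content = question
    return [{"role": "user", "content": content}]

direct = build_messages("Summarize the attached report.")
courteous = build_messages("Summarize the attached report.", polite=True)
```

Both message lists could then be sent to any chat-completion endpoint; the article's claim is that the second variant tends to elicit a more complete answer.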

Politeness can "gamify" chatbots

According to Allen Institute for AI scientist Nouha Dziri, polite requests "gamify" the AI model's mechanics. In an interview with TechCrunch, Dziri said that such requests activate parts of the model that are not engaged when requests are made directly and with less emotion.

"The more textual data they see during training, the more efficient they become. Therefore, 'being polite' means phrasing your requests in a way that matches the compliance patterns the models were trained on, which can increase the likelihood that they will deliver the desired output," Dziri explained.

However, the scientist also notes that politeness toward an AI does not guarantee that the model will generate human-like responses. Furthermore, being overly polite when asking an AI assistant for a response can itself become a problem: such phrasing may push results outside the model's safety guidelines, for example by generating offensive language or leaking personally identifiable information.

"Requests [can] probe areas where the model's safety training falls short, but where its ability to follow instructions excels," the scientist concluded.