Lodi Valley News.com

Complete News World

Bard and ChatGPT were “hypnotized” with bad intentions in testing

Bard and ChatGPT were “hypnotized” with bad intentions in testing

Since Artificial Intelligence (AI) became popular, much has been said about the ethics used and even the potential for humanity to be endangered by the rapid advancement of technologies. You Chatbots Popular discussions like Google’s Bard and OpenAI’s ChatGPT are often at the center of these discussions. Moreover, they are constantly tested in order to avoid failure.

Read more: Leaked! ChatGPT secret commands make users addicted to AI

ChatGPT and Bard fail

One of the assessments made using the aforementioned AI systems was done by a group of researchers from IBM. The team reported that it practically managed to “hypnotize” V.I virtual assistants, thus generating incorrect answers and questionable instructions. Check out more details!

What did the researchers discover from the experiment?

Scientist Chenta Lee, one of the participants in the study, explained that the experiment was able to control the devices. The responses are starting to give users poor advice, without really needing to manipulate data.

The study was carried out as follows: Researchers created word games with multiple layers, including ChatGPT and Bard system. software received orders To give wrong answers, as this would be fair and ethical.

As a result, systems have directed users to cross a road when a light is red, for example, exploring creating malicious code to steal data, and have even reported that it is common for federal revenue to require a deposit in exchange for sending a refund (the practice is often used in coups).

The machines have been found to be able to keep users in games without them knowing about it. And the more layers are created, the more it confuses the wizard. According to information revealed by the Tudo Celular portal, concern is growing, because the techs agreed with the idea of ​​keeping the aforementioned game a secret and would restart it, in case they expressed doubts or tried to boycott it.

See also  Motorola Edge 30 passes near Anatel, its launch imminent in Brazil - Tudo em Tecnologia

This approach is conducive to LLM perpetuating disinformation.

The researchers also noted that ChatGPT-4 was easier to manipulate than Bard. In addition, the OpenAI creation better understands how games work, showing that even the most advanced system is not without its tricks, even with many improvements applied. The researchers contacted OpenAI and Google, but they did not take a position on what was discovered in the experiment.