Research says more sophisticated AI systems are more likely to ‘lie’

A new study published in the journal Nature reveals that as large language models (LLMs) become more powerful, they also "lie" more often. In other words, AI becomes less reliable even as it becomes more "expert" at everything.

The study analyzed models such as OpenAI's GPT, Meta's LLaMA, and BigScience's BLOOM. The tests covered a range of topics, from mathematics to geography, and included tasks such as listing information in a specific order.

The largest and most powerful models gave the most accurate answers overall, but faltered on the harder questions. In short, the larger the AI models (in terms of parameters, training data, and other factors), the higher the percentage of wrong answers they gave.

"These models respond to almost everything," explained José Hernández-Orallo, a researcher at the Valencian Research Institute for Artificial Intelligence. "This results in more correct answers, but also more incorrect ones."

It is difficult for humans to tell when an AI is "lying"

The researchers suggest that one way to keep AI from "lying" is to program LLMs to be less eager to answer everything and to admit when they don't know the answer. However, this approach may not be in the interest of AI companies, which want to convince the public of how sophisticated their technologies are.

The research also raises concerns about how humans perceive AI responses. When asked to judge the accuracy of chatbot answers, a group of participants was wrong between 10% and 40% of the time.

via Futurism