“By contrast, generative models unilaterally generate confident, fluent responses with no uncertainty representations nor the ability to communicate their absence,” Kidd notes. These models either deflect questions on sensitive topics with disclaimers like “As an AI model,” or simply hallucinate, fabricating false information and presenting it to the user as fact.
The more frequently a person is exposed to a piece of false information, the more strongly they come to believe it. Likewise, repeated exposure to dubious claims, especially when they come from seemingly trustworthy AI models, makes false beliefs even harder to resist. This could easily become a self-perpetuating cycle of misinformation.
“Collaborative action requires teaching everyone how to discriminate actual from imagined capabilities of new technologies,” Kidd notes, calling on scientists, policymakers, and the general public to spread realistic information about much-hyped AI technology and, more importantly, about what it can and cannot do.
“These issues are exacerbated by financial and liability interests incentivising companies to anthropomorphise generative models as intelligent, sentient, empathetic, or even childlike,” says paper co-author Abeba Birhane, an adjunct assistant professor in Trinity’s School of Computer Science and Statistics.