AI Ethics Lucidly Questioning This Whole Hallucinating AI Popularized Trend That Has Got To Stop

If you have been keeping up with the latest news about AI, you’d almost certainly believe that you were hallucinating.

Wait, hold on, I meant to say that you would almost certainly believe that the AI was hallucinating.

And you would have lots of solid reasons for believing so. The notion of AI that hallucinates seems to keep gaining rather wide popularity. This raises all manner of AI Ethics qualms and issues. For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few.

Let’s start with some examples of how AI hallucination seems to be coming up.

Meta, formerly known as Facebook, recently released its latest AI conversational chatbot, called BlenderBot 3. BlenderBot 3 was preceded by BlenderBot 2 and the original BlenderBot. According to the official BlenderBot 3 website, the claim is that “BlenderBot 3 is capable of searching the internet to chat about virtually any topic, and it’s designed to learn how to improve its skills and safety through natural conversations and feedback from people in the wild.”

You might be familiar with similar efforts to make conversational chatbots publicly available, for which the results were sometimes a bit untoward.

It turns out that people all across the globe would use such AI-powered chatbots and at times opt to undercut the AI. Some would nefariously feed fake info into the chatbot in hopes that the chatbot would accept the content as true when it was purposefully false. Others would relish experimenting to see if they could get these chatbots to emit curse words or make foul commentary of a biased or discriminatory nature. This is all reminiscent of the sage line that we can’t be reliably expected to take proper care of our new toys.

This led to public concerns that some of the chatbots would spew out false information and toxic remarks. Those who devilishly prodded the chatbots in that direction at times felt victorious in their quest to befuddle and undercut the chatbots. But some insisted they were doing the right thing by showcasing how brittle and incomplete the chatbots were. The argument goes that these chatbots should not be placed into primetime use until they are ready for such exposure.

A counterargument often given is that the need to test and “grow” the chatbot requires that it be made available to the public at large. The viewpoint is that if a conversational chatbot is only guided by the AI developers, they will likely miss or omit all kinds of considerations that will only arise once the chatbot is made publicly available. The logic these days seems to be that the AI developers do as much advance prep as they feel they can, and then they gingerly release their chatbots. When doing so, they try to loudly note that the conversational chatbot is in its early stage and that despite various precautions it might still say unfortunate or troublesome things.

Into this societal milieu steps the BlenderBot 3. And, as per the BlenderBot 3 official description at the designated website, they have tried to anticipate and cope with the unpleasant possibilities: “We developed new techniques that enable learning from helpful teachers while avoiding learning from people who are trying to trick the model into unhelpful or toxic responses.”

The primary BlenderBot 3 technical report (insiders refer to this AI as BB3) also explains that another facet to be dealt with involves AI hallucinations: “Hence, this task aims to avoid model hallucination (made-up facts). We also use a set of QA tasks as well, where the answer is viewed as a knowledge response output (even if it is a short phrase)” as mentioned in the research paper entitled “BlenderBot 3: A Deployed Conversational Agent That Continually Learns To Responsibly Engage” by co-authors Kurt Shuster, Jing Xu, Mojtaba Komeili, Da Ju, Eric Michael Smith, Stephen Roller, Megan Ung, Moya Chen, Kushal Arora, Joshua Lane, Morteza Behrooz, William Ngan, Spencer Poff, Naman Goyal, Arthur Szlam, Y-Lan Boureau, Melanie Kambadur, and Jason Weston.

Accompanying the release of BB3 came some blaring headlines that BB3 supposedly is able to hallucinate. Reviewers suggested that since the conversational chatbot potentially at times portrays false facts as truths, this is an indication of AI hallucination. We are told that the hallucinating AI might even “forget” that it is a chatbot and start to “believe” or claim that it is human. For my coverage of the madcap rush of recent claims about AI being sentient and why this is unmitigated hogwash and dangerously misleading, see my analysis at the link here.

Meanwhile, shifting gears slightly, you might have noticed that there are a lot of emerging AI systems that try to translate text into imagery. I bring this up because the AI that does this is often depicted as being able to hallucinate. When you take a look at some of the imagery concocted by these text-to-art transformers, you can vividly see why some might think of the hallucination possibilities. The art or imagery that is produced will often seem like a dreamy state with reality distortions of the kind we have grown accustomed to associating with the hallucinations experienced by painters and kindred artists.

In the case of the conversational chatbot, the assumed principle for the AI developers is that they should try to minimize or even eliminate any chance of AI hallucinations. Ironically, one might suggest, the text-to-art AI is devised to intentionally perform and exploit AI hallucinations.

Here’s a description of how one particular AI Machine Learning (ML) text-to-art model is said to function when intentionally aiming to perform AI hallucinating acts: “During training, there are two streams of translation: a source sentence and a ground-truth image that is paired with it, and the same source sentence that is visually hallucinated to make a text-image pair. First the ground-truth image and sentence are tokenized into representations that can be handled by transformers; for the case of the sentence, each word is a token. The source sentence is tokenized again, but this time passed through the visual hallucination transformer, outputting a hallucination, a discrete image representation of the sentence. The researchers incorporated an autoregression that compares the ground-truth and hallucinated representations for congruency — e.g., homonyms: a reference to an animal “bat” isn’t hallucinated as a baseball bat. The hallucination transformer then uses the difference between them to optimize its predictions and visual output, making sure the context is consistent” (as per “Hallucinating To Better Text Translation” by Lauren Hinkel, MIT News, June 2022).

There you have it, proclaimed AI hallucinations that include a visual hallucination transformer.
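To make that two-stream training idea a bit more concrete, here is a minimal Python sketch of roughly that kind of setup. To be clear, this is not the researchers’ actual code; the module names, sizes, pooling step, and the cross-entropy “consistency” loss are all illustrative assumptions on my part, meant only to show a sentence being mapped to discrete image tokens that are then nudged toward the tokens of the ground-truth image paired with that sentence.

```python
# Minimal sketch (assumptions, not the cited researchers' code): a toy
# "hallucination" module maps a tokenized source sentence to discrete image
# tokens, and a consistency loss pulls those predictions toward the tokens of
# the ground-truth image paired with the sentence.
import torch
import torch.nn as nn

VOCAB_TEXT, VOCAB_IMAGE, D_MODEL, IMG_TOKENS = 1000, 512, 64, 16

class HallucinationTransformer(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_TEXT, D_MODEL)
        layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Predict logits over the discrete image-token vocabulary for each
        # of the IMG_TOKENS positions, pooled from the encoded sentence.
        self.to_image_tokens = nn.Linear(D_MODEL, IMG_TOKENS * VOCAB_IMAGE)

    def forward(self, text_tokens):                      # (batch, seq_len)
        h = self.encoder(self.embed(text_tokens))        # (batch, seq_len, d)
        pooled = h.mean(dim=1)                           # (batch, d)
        logits = self.to_image_tokens(pooled)
        return logits.view(-1, IMG_TOKENS, VOCAB_IMAGE)  # per-position logits

model = HallucinationTransformer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy batch: a tokenized source sentence and the tokenized ground-truth image.
text = torch.randint(0, VOCAB_TEXT, (2, 10))
true_image_tokens = torch.randint(0, VOCAB_IMAGE, (2, IMG_TOKENS))

logits = model(text)                                     # the "hallucination"
# Consistency loss: how far the hallucinated tokens are from the ground truth.
loss = nn.functional.cross_entropy(
    logits.reshape(-1, VOCAB_IMAGE), true_image_tokens.reshape(-1))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In a real system, the image tokens would presumably come from something like a learned discrete image codec and the comparison would feed a larger translation model, but the gist is the same: the text side is trained to “imagine” an image representation and is graded on how well that imagined representation lines up with reality.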

Kind of nifty, though presumably not a feature you want in an AI-based toaster nor in an AI driving system that guides a self-driving car. Whereas I’ve repeatedly emphasized that autonomous vehicles and self-driving cars won’t drink and drive (see my coverage of self-driving cars at the link here), and thus provide an inherent advantage over human drivers, one supposes that an AI hallucination within an AI driving system could be catastrophic. We might need to administer a DUI or equivalent to AI driving systems to see whether they were hallucinating when they bump into bicyclists, pedestrians, or human-driven cars.

Anyway, hallucinating AI seems to really be catching on.

Switch your attention to another example in the medical domain.

When an X-ray or MRI is undertaken, there is nowadays a good chance that some kind of AI will be used to clean up the images on a reconstruction basis or otherwise analyze the imagery. Researchers caution that this can introduce AI hallucinations into the mix: “The potential lack of generalization of deep learning-based reconstruction methods as well as their innate unstable nature may cause false structures to appear in the reconstructed image that is absent in the object being imaged. These false structures may arise due to the reconstruction method incorrectly estimating parts of the object that either did not contribute to the observed measurement data or cannot be recovered in a stable manner, a phenomenon that can be termed as hallucination” (as stated in “On Hallucinations in Tomographic Image Reconstruction” by co-authors Sayantan Bhadra, Varun A. Kelkar, Frank J. Brooks, and Mark A. Anastasio, IEEE Transactions on Medical Imaging, November 2021).

Yikes!

Having AI hallucinations in art is pretty innocuous while having AI hallucinations when diagnosing medical conditions and having to decide on life-or-death medical choices is a whole different ballgame. The medical research study proffers this altogether disconcerting concern: “The presence of such false structures in reconstructed images can possibly lead to an incorrect medical diagnosis. Hence, there is an urgent need to investigate the nature and impact of false structures arising out of hallucinations from deep learning-based reconstruction methods for tomographic imaging” (as cited above).
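To see mechanically how such false structures can arise, consider a tiny numerical sketch. This is purely my own illustrative toy (a random linear forward operator and a deliberately contrived reconstruction), not the cited paper’s method or its formal definitions: the point is simply that any structure a reconstruction places in the null space of an undersampled imaging operator is invisible to the measurements, and therefore can be entirely made up while the data still appears perfectly explained.

```python
# Minimal sketch (toy assumptions, not the cited paper's method): with an
# undersampled linear imaging operator A, structure placed in the null space
# of A does not affect the measurements at all, so a reconstruction can
# "hallucinate" it while remaining perfectly consistent with the data.
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 32                        # object has n pixels, only m < n measurements
A = rng.standard_normal((m, n))      # toy undersampled forward operator

f_true = rng.standard_normal(n)      # the true object
g = A @ f_true                       # observed (noiseless) measurements

A_pinv = np.linalg.pinv(A)
P_range = A_pinv @ A                 # projector onto the measurable components
P_null = np.eye(n) - P_range         # projector onto the null space of A

# A contrived reconstruction: the measurement-consistent part plus a spurious
# structure (standing in for what an unstable learned prior might invent).
spurious = P_null @ rng.standard_normal(n)
f_hat = A_pinv @ g + spurious

print(np.allclose(A @ f_hat, g))     # True: the data is explained perfectly...
hallucination = P_null @ (f_hat - f_true)
print(np.linalg.norm(hallucination)) # ...yet a nonzero false structure remains
```

In this toy setting, nothing in the measurements alone can distinguish the hallucinated reconstruction from a faithful one, which is precisely why the researchers flag the matter as urgent.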

We ought to take a deep breath and consider what this portends.

Here are my ten keystone points about so-called AI hallucinations:

1) The word “hallucination” is being borrowed from the conventional meaning associated with human hallucinating phenomena and being recast into the AI realm

2) There is no all-agreed standardized ironclad definition of what AI hallucination consists of

3) Some say that anytime AI doesn’t seemingly tell the truth it is under the guise of an AI hallucination

4) Others say that when AI garbles something or has a burp (that’s highly technical) the AI is suffering from an AI hallucination

5) AI hallucination is becoming an overly convenient catchall for all sorts of AI errors and issues (it is sure catchy and rolls easily off the tongue, snazzy one might say)

6) AI Ethics rightfully finds this trend disturbing as an insidious escape hatch for excusing AI problems

7) You can somewhat argue that this is partially beneficial in drawing attention to AI problems

8) We need though to be careful that this is furthering the anthropomorphizing of AI and creating widespread misconceptions about AI

9) The implication is that the AI of today is mentally akin to humans and thus able to hallucinate as humans do

10) Either we put the kibosh on the AI hallucination verbiage or at least find some alternative phrasing to use that is not so misleading

We will return to those ten key points momentarily and unpack them.

It might be useful to first clarify what I mean when referring to AI and also provide a brief overview of Machine Learning (ML) and Deep Learning (DL). There is a great deal of confusion as to what AI connotes. I would also like to introduce the precepts of AI Ethics to you, which will be integral to this discourse.

Stating the Record About AI

Let’s make sure we are on the same page about the nature of today’s AI.

There isn’t any AI today that is sentient.

We don’t have this.

We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as The Singularity, see my coverage at the link here).

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning and Deep Learning, which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.

Part of the issue is our tendency to anthropomorphize computers and especially AI. When a computer system or AI seems to act in ways that we associate with human behavior, there is a nearly overwhelming urge to ascribe human qualities to the system. It is a common mental trap that can grab hold of even the most intransigent skeptic about the chances of reaching sentience. For my detailed analysis on such matters, see the link here.

To some degree, that is why AI Ethics and Ethical AI is such a crucial topic.

The precepts of AI Ethics get us to remain vigilant. AI technologists can at times become preoccupied with technology, particularly the optimization of high-tech. They aren’t necessarily considering the larger societal ramifications. Having an AI Ethics mindset and doing so integrally to AI development and fielding is vital for producing appropriate AI, including the assessment of how AI Ethics gets adopted by firms.

Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have laws to govern various uses of AI. New laws are being bandied around at the federal, state, and local levels that concern the range and nature of how AI should be devised. The effort to draft and enact such laws is a gradual one. AI Ethics serves as a considered stopgap, at the very least, and will almost certainly to some degree be directly incorporated into those new laws.

Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient. In fact, they forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages. See for example my coverage at the link here.

In prior columns, I’ve covered the various national and international efforts to craft and enact laws regulating AI, see the link here, for example. I have also covered the various AI Ethics principles and guidelines that various nations have identified and adopted, including for example the United Nations effort such as the UNESCO set of AI Ethics that nearly 200 countries adopted, see the link here.

Here’s a helpful keystone list of Ethical AI criteria or characteristics regarding AI systems that I’ve previously closely explored:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

Those AI Ethics principles are earnestly supposed to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the norms being established for Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As emphasized previously herein, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.

Let’s keep things down to earth and focus on today’s computational non-sentient AI.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, if so found, the AI system then will use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.

I think you can guess where this is heading. If humans that have been making the patterned upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se.
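To illustrate that point, here is a short Python sketch using entirely synthetic data and an assumed setup of my own devising (not any particular deployed system): a pattern-matching model trained on biased historical decisions dutifully reproduces the disparity, even though it never directly sees the protected attribute and only sees a correlated proxy.

```python
# Minimal sketch (synthetic data, assumed setup): historical decisions were
# biased against group 1; a model trained on those decisions, using only a
# "neutral-looking" skill score and a proxy feature correlated with group,
# ends up reproducing the disparity in its own predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                 # protected attribute (0 or 1)
skill = rng.normal(0, 1, n)                   # legitimate signal
proxy = group + rng.normal(0, 0.5, n)         # e.g., a zip-code-like proxy

# Historical decisions: driven by skill, but penalizing group 1 (the bias).
past_approved = (skill - 1.0 * group + rng.normal(0, 0.5, n)) > 0

# Train only on the seemingly neutral features: skill and the proxy.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, past_approved)

# The learned model still approves group 0 far more often than group 1.
pred = model.predict(X)
print("approval rate, group 0:", pred[group == 0].mean())
print("approval rate, group 1:", pred[group == 1].mean())
```

No hallucination is needed to explain that outcome; it is plain old computational pattern matching faithfully mimicking the biased data it was given.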

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing that there will be biases still embedded within the pattern matching models of the ML/DL.

You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.

Not good.

I believe that I’ve now set the stage adequately to examine the AI hallucination phenomenon.

Waking Up To Using Unwise Words For AI Usage

Let’s consider two major considerations about so-called AI hallucinations:

  • The wording of “AI hallucinations” is rather unfortunate
  • There is a real thing of AI that messes up or has problems, which we ought to give a different name and seek to avoid repurposing “hallucinations”

You can certainly sympathize with the propensity to describe AI that messes up as having hallucinated. Everybody is already generally familiar with the word. As such, it seems like there is no harm and no foul in bending it to meet the needs of the AI field.

There are AI insiders that are surprised that anyone would be upset at the use of such phrasing. Let it be, they insist. No big deal. The word “hallucination” is convenient and denotes well the matter at hand. No other word quite so easily fits. If we came up with some altogether new word phrasing such as AI encabulation (I’ve previously covered this, see the link here), nobody would know what it means.

Returning to my ten points about AI hallucinations, a big problem is that the word “hallucination” is already fully associated with humans, and trying to replant the word into the AI field is not wise. The public is likely to right away assume that today’s AI can hallucinate in the same manner as humans do. Likewise, this further implies that contemporary AI is equivalent to the human mind. All in all, this is furthering the angst over anthropomorphizing AI.

The more that we use words already associated with humans, the more that we are going to suggest that modern AI is on par with humans. People might misjudge when using AI and assume that AI can do more than it really can. Expectations are going to get further out-of-whack. Problems are going to arise as this gets more prominent.

Another disturbing aspect that I’ve mentioned in my list of ten points is that the AI hallucination phrasing is becoming watered down and can be used to cover up all manner of poorly devised AI.

Ponder this dialogue:

  • Why did the AI go awry?
  • I don’t know, says an AI developer, it must have encountered an AI hallucination.

What a convenient excuse.

Most non-technical people would probably buy into the claim that it was an AI hallucination. It sounds nearly technical and notably significant. Wow, an AI hallucination. Darn those AI hallucinations. The focus on responsibility and accountability goes out the window. We just shrug our shoulders and say it was all an AI hallucination. No rhyme or reason, just a hallucination by the AI.

Sad.

Worse still, a bad path to proceed upon.

Defining A Mess And What Direction To Go

As a reminder, here again, are my ten keystone points about so-called AI hallucinations:

1) The word “hallucination” is being borrowed from the conventional meaning associated with human hallucinating phenomena and being recast into the AI realm

2) There is no all-agreed standardized ironclad definition of what AI hallucination consists of

3) Some say that anytime AI doesn’t seemingly tell the truth it is under the guise of an AI hallucination

4) Others say that when AI garbles something or has a burp (that’s highly technical) the AI is suffering from an AI hallucination

5) AI hallucination is becoming an overly convenient catchall for all sorts of AI errors and issues (it is sure catchy and rolls easily off the tongue, snazzy one might say)

6) AI Ethics rightfully finds this trend disturbing as an insidious escape hatch for excusing AI problems

7) You can somewhat argue that this is partially beneficial in drawing attention to AI problems

8) We need though to be careful that this is furthering the anthropomorphizing of AI and creating widespread misconceptions about AI

9) The implication is that the AI of today is mentally akin to humans and thus able to hallucinate as humans do

10) Either we put the kibosh on the AI hallucination verbiage or at least find some alternative phrasing to use that is not so misleading

Those that advocate for the AI hallucination as a viable expression are apt to indicate that for all its faults as a moniker, it does at least draw attention to AI-related issues that need focus.

Consider this traditional dialogue:

  • What is going on with our AI that seems to be fouling up?
  • I told you when we first designed the AI, says the AI developer, we needed to include sufficient resources to prevent or mitigate the chances of AI bugs or errors, but you wouldn’t do so and now we are in a heap of trouble.

Now consider this new-version dialogue with AI hallucinations inserted into the phrasing:

  • What is going on with our AI that seems to be fouling up?
  • I told you when we first designed the AI, says the AI developer, we needed to include sufficient resources to prevent or mitigate the chances of AI hallucinations, but you wouldn’t do so and now we are in a heap of trouble.

Does that seem more or less compelling to you?

Yet another point often raised is that AI hallucinations are a handy coinage since they can be construed as either negative or positive. In the case of the text-to-art AI, you could seemingly proclaim, boldly and happily, that your AI is better able to hallucinate than someone else’s AI. Score a plus one in the positive category for the advent of AI hallucinations.

Piling onto the side of those that favor the wording, they tell you to look at any everyday dictionary and you’ll become convinced that referring to AI hallucinations is perfectly sensible.

Here are some examples of how hallucination is generally defined:

  • “Hallucinations involve sensing things such as visions, sounds, or smells that seem real but are not. These things are created by the mind” (source: U.S. National Library of Medicine, online MedlinePlus).
  • “A hallucination is a false perception of objects or events involving your senses: sight, sound, smell, touch and taste. Hallucinations seem real, but they’re not. Chemical reactions and/or abnormalities in your brain cause hallucinations” (source: online Cleveland Clinic).
  • “If you’re like most folks, you probably think hallucinations have to do with seeing things that aren’t really there. But there’s a lot more to it than that. It could mean you touch or even smell something that doesn’t exist. There are many different causes. It could be a mental illness called schizophrenia, a nervous system problem like Parkinson’s disease, epilepsy, or a number of other things. If you or a loved one has hallucinations, go see a doctor” (source: WebMD online).

Though pundits favoring the phrasing of AI hallucinations might believe these kinds of definitions help their case, closer scrutiny seems to suggest that the definitions weaken their case.

Notice that the hallucination definitions tend to refer to the mind and allude to potential chemical reactions or abnormalities of the brain. AI of today is not like the brain nor like the mind. Attempts to liken AI to these facets are generally viewed as inappropriate.

I’ll add an additional twist that you might not have yet mulled over.

If we keep referring to AI hallucinations, will that slop over into the realm of human hallucinations and distort or disturb the meaning and significance of human hallucinations?

For example, suppose we take at face value that having AI hallucinations is a good thing, such as when referring to text-to-art. Would that seem to suggest that human hallucinations are a good thing too? You might have noticed that one of the definitions of hallucinations indicated that if a person is having hallucinations they ought to go see a medical doctor. If we become numb to the “hallucinations” wording by its expanding use for AI, perhaps this will lessen a sense of urgency or concern related to human hallucination aspects.

Good point, some exhort, while others say that is a bridge too far about wanting to suppress or get rid of the handy-dandy AI hallucinations moniker.

Conclusion

You might have in the recesses of your mind the fact that the famous play Macbeth made extensive use of hallucinations, for example:

  • “Come, let me clutch thee; I have thee not, and yet I see thee still. A dagger of the mind, a false creation, Proceeding from the heat-oppressed brain?”

Shakespeare seemed to have used the narrative tool of hallucinations to showcase a semblance of moral decay in characters in his writing. Should we leave hallucinations to the realm of humans and human minds, or shall we repurpose them into the field of AI?

Those within the AI field and even those outside of AI are embracing the AI hallucinations catchphrase, partially due to ease of use and partially due to the cachet that it seems to imbue on those that use it. Yes, indeed, the references to AI hallucination are abundantly gaining traction.

I assure you that this rapidly growing trend is not merely a hallucination.

See for yourself.
