
AI Ethics And AI-Induced Psychological Inoculation To Help Humans With Disinformation

What are we going to do about the massive glut of disinformation and misinformation?

It is all demonstrably getting worse and worse with each passing day.

Perhaps Artificial Intelligence (AI) can come to our rescue. Yes, that’s right, we might be able to harness the beneficial uses of AI to cope with our relentless tsunami of disinformation and misinformation. We might be wise to try doing so. Every avenue of potential solution would seem worthy of pursuit.

As an aside, I’d like to immediately acknowledge that AI is undoubtedly going to be part of the problem too. There is no question that humans can readily leverage AI to generate disinformation and misinformation. Furthermore, AI can insidiously be used to make disinformation and misinformation appear amazingly valid, fooling humans into believing that the presented information is alluringly accurate and factual. A decidedly sad side of what AI brings to the table. We will come back to this downside conundrum toward the end of this discussion.

For now, let’s put on our smiley faces and explore how AI can be beneficial in bringing disinformation and misinformation to their knees. One important undercurrent will be that all of this dovetails into vital elements of AI Ethics. My column coverage of AI Ethics and Ethical AI is ongoing and extensive, including the link here and the link here, just to name a few.

Consider these cornerstone ways that AI can be an especially helpful ally in the war on disinformation and misinformation:

  • Stop At The Get-Go: AI can be used to detect and try to excise disinformation and misinformation before it gets loose
  • Filter Before Seen: AI can be used to filter disinformation and misinformation so that you don’t need to worry about seeing it
  • Prepare You To Be Immune: AI can be used to bolster your readiness and ability to contend with disinformation and misinformation (known somewhat formally as providing a kind of psychological inoculation)
  • Other

The first listed bullet point entails trying to stop disinformation and misinformation at the earliest possible moment, prior to the content getting into the world.

This is a highly problematic approach. Some would argue vociferously that this might be a Big Brother attempt to suppress freedom of speech. How far would this AI be able to go? Could it prevent people from freely expressing their views? This could eerily become a slippery slope, with AI that innocently started with the best of intentions ultimately producing the worst nightmare of evil results.

I’m sure you get the picture.

The second bullet point is a bit more moderate and suggests that we could use AI to filter content for us.

You might have an AI filter bot that will scan all your incoming data feeds from various news and other sources. The AI is tailored to catch any disinformation or misinformation that fits your personal criteria. Thus, in such a scenario, it isn’t a Big Brother censorship situation. You control the AI and how it is filtering your veritable inbox of info on your behalf.
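
To make the idea concrete, here is a minimal sketch of such a personal filter bot, assuming a hypothetical scoring function in place of a real trained model; the phrase list, threshold, and names are illustrative stand-ins, not any actual product’s API.

```python
# A minimal sketch of a personal AI filter bot, not a production design.
# The scoring heuristic below is a hypothetical stand-in; a real bot would
# call a trained model tuned to the user's own criteria.

from dataclasses import dataclass

@dataclass
class FeedItem:
    source: str
    text: str

def misinformation_score(item: FeedItem) -> float:
    """Hypothetical scoring function; a real system would use a trained model."""
    suspect_phrases = ["miracle cure", "they don't want you to know"]
    hits = sum(phrase in item.text.lower() for phrase in suspect_phrases)
    return min(1.0, hits / len(suspect_phrases))

def filter_feed(items: list[FeedItem], threshold: float = 0.5) -> list[FeedItem]:
    """Keep only items scoring below the user's chosen threshold."""
    return [item for item in items if misinformation_score(item) < threshold]

feed = [
    FeedItem("blog", "A miracle cure they don't want you to know about"),
    FeedItem("news", "City council approves new transit budget"),
]
print([item.source for item in filter_feed(feed)])  # ['news']
```

In practice, the scoring step is where all the hard questions live, since it embodies a judgment about what counts as misinformation in the first place.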

Sounds pretty good.

There are though some noteworthy concerns.

For example, we are already greatly polarized in our views, and this use of AI might make that polarization deeper and darker. Imagine that with this slick AI working nonstop 24×7, you never ever need to see a whit of information that you have classified as potentially being disinformation or misinformation. Your polarized perspective is now almost guaranteed to remain intact. All day long, whenever you take a look at the info awaiting your attention, it has already been fully preselected, with no chance of your glancing at so-called disinformation and misinformation.

I say that disinformation and misinformation can be so-called because there is a tremendous amount of controversy over what actually constitutes disinformation and misinformation. Some pundits insist that there is an absolute basis for ascertaining what is disinformation and what is misinformation. There is right and there is wrong. Everything, they claim, can be unerringly classified as either disinformation or misinformation.

Not everyone sees things as being quite so clear-cut.

The proverbial on-or-off, mutually exclusive dichotomy is said to be a misleading frame of mind. One person’s disinformation might not be considered disinformation by another person. Likewise for misinformation. The assertion is that disinformation and misinformation range in nature and magnitude. Trying to definitively classify all information into one pile or the other is a lot harder than the hand-waving suggests.

The gist is that the second bullet point about using AI as a filtering mechanism has its tradeoffs. There is little question that AI is going to be increasingly put to this use. At the same time, we need to be mindful of the challenges that such AI is going to bring to the fore. AI as a filter for disinformation and misinformation is not some silver bullet or slam dunk.

That takes us to the third point, namely the possibility of using AI to make humans better at dealing with disinformation and misinformation.

You probably haven’t heard much about this third avenue of using AI in this context. It is just beginning to emerge. You are now at the cutting edge of something that will likely grow and gradually be put into use. Please know though that as this popularity expands, controversy over whether it is a suitable approach is also going to become highly visible.

Part of the issue is that the AI is being used for what some would derogatorily refer to as playing mind games with humans.

That seems ominous.

This also brings us to the realm of AI Ethics.

All of this also relates to soberly emerging concerns about today’s AI, especially the use of Machine Learning and Deep Learning as a form of technology and how it is being utilized. You see, there are uses of ML/DL that tend to involve having the AI be anthropomorphized by the public at large, believing or choosing to assume that the ML/DL is either sentient AI or close to it (it is not). In addition, ML/DL can contain aspects of computational pattern matching that are undesirable or outright improper, or illegal from an ethics or legal perspective.

It might be useful to first clarify what I mean when referring to AI overall and also provide a brief overview of Machine Learning and Deep Learning. There is a great deal of confusion as to what Artificial Intelligence connotes. I would also like to introduce the precepts of AI Ethics to you, which will be especially integral to the remainder of this discourse.

Stating the Record About AI

Let’s make sure we are on the same page about the nature of today’s AI.

There isn’t any AI today that is sentient.

We don’t have this.

We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as The Singularity, see my coverage at the link here).

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning and Deep Learning, which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.

Part of the issue is our tendency to anthropomorphize computers and especially AI. When a computer system or AI seems to act in ways that we associate with human behavior, there is a nearly overwhelming urge to ascribe human qualities to the system. It is a common mental trap that can grab hold of even the most intransigent skeptic about the chances of reaching sentience.

To some degree, that is why AI Ethics and Ethical AI is such a crucial topic.

The precepts of AI Ethics get us to remain vigilant. AI technologists can at times become preoccupied with technology, particularly the optimization of high-tech. They aren’t necessarily considering the larger societal ramifications. Having an AI Ethics mindset and doing so integrally to AI development and fielding is vital for producing appropriate AI, including the assessment of how AI Ethics gets adopted by firms.

Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have laws to govern various uses of AI. New laws are being bandied around at the federal, state, and local levels that concern the range and nature of how AI should be devised. The effort to draft and enact such laws is a gradual one. AI Ethics serves as a considered stopgap, at the very least, and will almost certainly to some degree be directly incorporated into those new laws.

Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient. They forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages. See for example my coverage at the link here.

In prior columns, I’ve covered the various national and international efforts to craft and enact laws regulating AI, see the link here, for example. I have also covered the various AI Ethics principles and guidelines that various nations have identified and adopted, including for example the United Nations effort such as the UNESCO set of AI Ethics that nearly 200 countries adopted, see the link here.

Here’s a helpful keystone list of Ethical AI criteria or characteristics regarding AI systems that I’ve previously closely explored:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

Those AI Ethics principles are earnestly supposed to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As emphasized earlier herein, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.

Let’s keep things down to earth and focus on today’s computational non-sentient AI.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. Having found such patterns, the AI system will then use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.

I think you can guess where this is heading. If the humans who have been making the patterned-upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects in the AI-crafted modeling per se.
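
As a toy demonstration of that mimicry, consider the sketch below; the “historical decisions” are fabricated for illustration, and the model simply learns whatever correlation the data carries, biased or not.

```python
# A toy illustration of how ML pattern matching can mimic biases present
# in its training data. The data is fabricated purely for demonstration.

from sklearn.linear_model import LogisticRegression

# Feature: an attribute that ought to be irrelevant (encoded 0 or 1).
# Label: past human decisions that happened to correlate with it.
X = [[0], [0], [0], [1], [1], [1]]
y = [1, 1, 1, 0, 0, 0]  # approvals historically tracked the attribute

model = LogisticRegression().fit(X, y)
print(model.predict([[0], [1]]))  # [1 0] -- the bias is faithfully learned
```

The model is not malicious; it is faithfully pattern matching, which is exactly the concern.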

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now-hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing there will be biases still embedded within the pattern matching models of the ML/DL.

You could somewhat invoke the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.

Not good.

I believe that I’ve now set the stage to sufficiently discuss the role of AI as a means to prompt psychological inoculation related to dealing with disinformation and misinformation.

Getting Into The Minds Of Humans

Let’s start with the basics or fundamentals underlying misinformation and disinformation.

Generally, misinformation refers to false or misleading information.

Disinformation is roughly the same, though it carries the added element of intent. We normally construe information as being disinformation when it is information intended to misinform.

I might tell you that it is currently 10 o’clock at night, which let’s say is false because the time is really midnight. If I had told you 10 o’clock as a hunch and was not trying to be deceptive, we would usually say that I had misinformed you. I had conveyed misinformation. Maybe I was lazy or perhaps I truly believed it was 10 o’clock. On the other hand, if I had mentioned 10 o’clock because I intentionally wanted to deceive you into thinking that the time was 10 o’clock and that I knew the time was actually midnight, this could be said to be a form of disinformation.
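
Here is a tiny sketch encoding that distinction in code; the field names and the classify helper are purely illustrative, not any standard taxonomy.

```python
# A sketch of the distinction drawn above: misinformation is false content;
# disinformation adds intent to deceive. Field names are illustrative only.

from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    is_false: bool
    intent_to_deceive: bool

def classify(claim: Claim) -> str:
    if not claim.is_false:
        return "information"
    return "disinformation" if claim.intent_to_deceive else "misinformation"

honest_mistake = Claim("It is 10 o'clock", is_false=True, intent_to_deceive=False)
deliberate_lie = Claim("It is 10 o'clock", is_false=True, intent_to_deceive=True)
print(classify(honest_mistake))  # misinformation
print(classify(deliberate_lie))  # disinformation
```

The same false statement lands in one bucket or the other based solely on intent.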

One notable aspect of information overall is that typically we are able to spread around information and thus information can become somewhat widespread. Information can veritably flow like water, in a broad sense.

I tell you it is 10 o’clock at night. You now have that particular piece of information. You might yell aloud to a group of nearby people that it is 10 o’clock at night. They now also have that same information. Perhaps some of those people get onto their cell phones and call other people to tell them that it is 10 o’clock. All in all, information can be spread or shared and sometimes done so quickly while in other instances done so slowly.

In a sense, you could contend that information can go viral.

There is a coined word or terminology that you might not have seen or used before that helps describe this phenomenon of information going viral: the word is infodemic. This word is a mashup of information and epidemic. By and large, an infodemic is associated with circumstances involving the spread of misinformation or disinformation. The notion is that false or misleading information can go viral, undesirably, similar to the undesirable spread of disease or illnesses.
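
To see why the epidemic analogy is tempting, here is a back-of-the-envelope simulation of a claim spreading through contacts; all the parameters are invented purely to show the viral takeoff, not calibrated to any real study.

```python
# A toy simulation of the "infodemic" idea: a claim spreading like a disease.
# All parameters are made up purely to illustrate exponential takeoff.

import random

random.seed(42)
population = 10_000
informed = {0}           # person 0 starts with the false claim
share_probability = 0.3  # chance an informed person passes it on per contact
contacts_per_step = 4

for step in range(10):
    newly_informed = set()
    for person in informed:
        for _ in range(contacts_per_step):
            contact = random.randrange(population)
            if contact not in informed and random.random() < share_probability:
                newly_informed.add(contact)
    informed |= newly_informed
    print(f"step {step + 1}: {len(informed)} people have the claim")
```

Lower the share probability and the spread fizzles, which is roughly the effect that inoculation aims to produce in human carriers.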

In the example about the time being 10 o’clock at night, this seeming fact was a piece of information that was spread to the nearby group of people. They in turn spread the fact to others. If the 10 o’clock claim was fakery, then this particular instance of disinformation or misinformation was spread to many others. They might not know that the information was misinformation or possibly disinformation.

I trust that all these definitions and fundamentals seem sensible and that you are on board so far.

Great, let’s continue.

I’ve led you somewhat surreptitiously into something that occupies a great deal of fascination and also angst. The gist is that there are arguably reasonably sound parallels between what diseases do virally and what misinformation or disinformation does virally.

Not everyone agrees with these claimed parallels. Nonetheless, they are intriguing and worthy of consideration.

Allow me to elaborate.

You see, we can try to leverage the handy analogy of human-borne diseases and illnesses that spread, doing so to compare a like possibility with the spread of misinformation and disinformation. To try and stop the spread of a disease, we can aim to detect an emerging diseased source point early and seek to contain the potential spread of the ailment. Another approach to deal with a spreading disease would be to guard against getting it via the prudent use of wearing a mask or protective gear. A third approach could consist of taking vaccinations to try and build your immunity related to the disease.

We have now come full circle in that those same approaches to coping with diseases can be explicitly likened to dealing with misinformation and disinformation. I earlier mentioned that there are akin efforts underway to employ Artificial Intelligence for purposes of trying to cope with disinformation and misinformation, notably (as mentioned earlier):

  • Stop At The Get-Go: AI can be used to detect and try to excise disinformation and misinformation before it gets loose
  • Filter Before Seen: AI can be used to filter disinformation and misinformation so that you don’t need to worry about seeing it
  • Prepare You To Be Immune: AI can be used to bolster your readiness and ability to contend with disinformation and misinformation (known somewhat formally as providing a kind of psychological inoculation)
  • Other

The third aspect will be of most interest herein.

Here’s the deal.

We know that diseases usually strike the human body. With the analogy of how misinformation and disinformation occur, we could suggest that foul information strikes at the human mind. Yes, you can presumably come into contact with disinformation or misinformation that flows into your mind. The disinformation or misinformation potentially corrupts or poisons your way of thinking.

A human body can be vaccinated to try and prepare itself for coming into contact with diseases. A big question arises about whether we can do the same for the human mind. Is it possible to try and inoculate the mind so that when disinformation or misinformation comes to your mind that you are ready for it and have been inoculated accordingly?

A field of study known as psychological inoculation posits that the mind can indeed be inoculated in the sense of being readied to handle misinformation or disinformation.

Consider this description in a recent research study regarding psychological inoculation and what is sometimes labeled as doing prebunking:

  • “Debunking misinformation is also problematic because correcting misinformation does not always nullify its effects entirely, a phenomenon known as the continued influence effect. Accordingly, in contrast to debunking, prebunking has gained prominence as a means to preemptively build resilience against anticipated exposure to misinformation. This approach is usually grounded in inoculation theory. Inoculation theory follows a medical immunization analogy and posits that it is possible to build psychological resistance against unwanted persuasion attempts, much like medical inoculations build physiological resistance against pathogens” (Science Advances, August 24, 2022, “Psychological Inoculation Improves Resilience Against Misinformation On Social Media” by co-authors Jon Roozenbeek, Sander van der Linden, Beth Goldberg, Steve Rathje, and Stephan Lewandowsky).

Returning to my example about the time being 10 o’clock at night, suppose that I had previously told you that sometimes the claimed time is not the actual time. You henceforth have a form of inoculation to be wary of claimed times. This inoculation has prepared you for coming into contact with claimed times that are disinformation or misinformation.

If I had forewarned you several years ago about claimed times not being actual times, there is a chance that you might not think of that long ago warning. Thus, the earlier inoculation has (shall we say) worn off. My inoculation for you might need to be boosted.

There is also a chance that the inoculation wasn’t specific enough for you to use it when so needed. If I had years ago warned you about claimed times versus actual times, that might be overly broad. The inoculation might not work in the specific instance of your being told about 10 o’clock. In that sense, perhaps my inoculation should have been that you should be wary when a claimed time of 10 o’clock is used. Of course, inoculations in the case of diseases are somewhat the same, at times being very specific to known ailments while in other cases being a broad spectrum.

An oft-cited research study done in 1961 on psychological inoculation by William McGuire of Columbia University is generally now considered a classic in this field of study. You might find of interest these key points he made at that time:

  • “Such generalized immunization could derive from either of two mechanisms. Pre-exposure might shock the person into realizing that the “truisms” he has always accepted are indeed vulnerable, thus provoking him to develop a defense of his belief, with the result that he is more resistant to the strong counterarguments when they come. Alternatively, the refutations involved in the pre-exposure might make all subsequently presented counterarguments against the belief appear less impressive” (William McGuire, “Resistance To Persuasion Conferred By Active And Passive Prior Refutation Of The Same And Alternative Counterarguments”, Journal of Abnormal and Social Psychology, 1961).

Do you find this analogy of inoculations and immunization a useful and apt comparison to the realm of misinformation and disinformation?

Some do, some do not.

For purposes of this discussion, please accept that the premise is reasonable and apt.

How are we to try and inoculate or immunize people’s minds?

We could get people to read books that might enlighten their minds. We might tell them about it, or have them watch videos or listen to audio tapes. Etc.

And we might use AI to do the same.

An AI system might be devised to be your inoculator. Whenever you start to go online such as looking at the Internet, an AI-based app might prepare you for your online journey. The AI might feed you a teensy tiny amount of disinformation that is labeled as such, allowing you to realize that you are about to be seeing something that is intentionally false.

Upon exposure to this AI-fed disinformation, your mind is now getting primed to cope with disinformation or misinformation that you might encounter in the wild on the Internet. Your mind has been readied. Voila, you see a blog on the Internet that proffers a claimed fact that alien creatures from Mars are already here on earth and hiding in plain sight, but this seeming disinformation or misinformation is readily rejected by your mind due to the prior inoculation (well, then again, maybe it is truthful and they really are here!).
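
Here is a minimal sketch of what such an inoculator app might look like, assuming a hypothetical catalog of manipulation techniques; the technique names and warning texts are made-up examples, not drawn from any deployed system.

```python
# A minimal sketch of an AI "inoculator" that prebunks: it shows the user
# a small, clearly labeled dose of a manipulation technique before they go
# online. The technique catalog and messages are hypothetical examples.

INOCULATION_DOSES = {
    "false_dichotomy": "Watch for claims that only two options exist.",
    "fake_expert": "Watch for credentials that cannot be verified.",
    "emotional_language": "Watch for wording engineered to provoke outrage.",
}

def prebunk(technique: str) -> str:
    """Return a short, clearly labeled warning for one manipulation technique."""
    warning = INOCULATION_DOSES[technique]
    return f"[LABELED EXAMPLE OF MANIPULATION] {warning}"

# Before a browsing session, show the user one dose:
print(prebunk("fake_expert"))
```

The key design point, per inoculation theory, is that the dose is small and explicitly labeled as manipulation, so the user builds resistance without being misled.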

Anyway, I hope that you are able to discern now how it is that AI could help inoculate or immunize humans with respect to disinformation or misinformation.

Various AI apps are being devised that will perform as disinformation or misinformation inoculators. The AI might seek to provide inoculation that is broad and provides an overall semblance of immunization. AI could also be devised for more specific forms of inoculation. Furthermore, the AI can work on a personalized basis that is tuned to your particular needs or interests. Advanced AI in this space will also try to determine your tolerance level, mental absorption rate, retention capacity, and other factors when composing and presenting so-called immunization shots, as it were.
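
As a sketch of how that personalization might be parameterized, consider the following; the factors come from the paragraph above, but the formulas are invented placeholders rather than validated psychology.

```python
# A sketch of how a personalized inoculator might tune dose and timing.
# The factors (tolerance, absorption, retention) come from the text above;
# the formulas are invented placeholders, not validated psychology.

from dataclasses import dataclass

@dataclass
class UserProfile:
    tolerance: float   # 0..1, how much labeled falsehood the user accepts
    absorption: float  # 0..1, how quickly lessons sink in
    retention: float   # 0..1, how slowly the effect wears off

def plan_dose(profile: UserProfile) -> dict:
    return {
        "items_per_session": max(1, round(3 * profile.tolerance)),
        "days_between_boosters": max(7, round(60 * profile.retention)),
        "session_minutes": max(2, round(10 * (1 - profile.absorption))),
    }

print(plan_dose(UserProfile(tolerance=0.6, absorption=0.8, retention=0.4)))
```

A real system would presumably learn such settings over time rather than compute them from fixed formulas.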

Seems quite handy.

AI As Dangerous Mind Games Player

AI used in this manner would at first glance seem quite handy (hey, I mentioned that just a second ago).

There is a slew of potential downsides and problems that are worrisome and perhaps frightful.

In my columns, I often discuss the dual-use capacities of AI, see for example the link here. AI can be a vital contributor to humankind. Alas, AI is also encumbered by lots of dangers and unfortunate pitfalls.

For the case of AI as an inoculator, let’s consider these demonstrative AI Ethics related issues:

  • Adverse reactions by humans
  • Non-responsive reactions by humans
  • AI mistargeting
  • AI under-targeting
  • Cyber Breach of the AI
  • Other

We will briefly explore those concerns.

Adverse Reactions By Humans

Suppose that a human receiving this kind of AI-based inoculation has an adverse reaction or produces an adverse effect.

The person might misconstrue the immunization and suddenly become unreceptive to any information they receive. They block off all information. The AI has somehow triggered them into tossing out the baby with the bathwater (an old saying, perhaps worth retiring). Rather than only trying to cope with disinformation and misinformation, the person has reacted by deciding that all information is always false.

I don’t think we want people to go that overboard.

There is a multitude of adverse reactions that AI might foster. This is partially due to how the AI attempted to perform the inoculation, but we also have to lay part of the issue at the feet of the human that received the inoculation. They might have reacted in wild or bizarre ways that others receiving the same AI inoculation did not.

Again, you can liken this to the analogy of inoculations for diseases.

In short, it will be important that when such AI efforts are utilized, they be done in responsible ways that seek to minimize adverse effects. There should also be a follow-on aspect of the AI to try and ascertain whether an adverse reaction has occurred. If there is a detected adverse reaction, the AI should be devised to try and aid the person in their adverse response and seek to overcome or alleviate the response.
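
A crude sketch of that follow-on monitoring might look like this; the rejection-rate signal and the 95% threshold are hypothetical stand-ins for whatever indicators a real system would track.

```python
# A sketch of the follow-on monitoring described above: after an inoculation,
# watch for signs the user has overcorrected into rejecting everything.
# The signal and threshold are hypothetical stand-ins.

def rejection_rate(items_shown: int, items_rejected: int) -> float:
    return items_rejected / items_shown if items_shown else 0.0

def check_for_adverse_reaction(items_shown: int, items_rejected: int) -> str:
    rate = rejection_rate(items_shown, items_rejected)
    if rate > 0.95:
        # User is dismissing essentially all information, true or false.
        return "adverse: schedule a corrective follow-up session"
    return "ok: no adverse reaction detected"

print(check_for_adverse_reaction(items_shown=200, items_rejected=198))
```

The harder part, as noted above, is then aiding the person in overcoming the reaction, which no simple threshold can accomplish.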

Non-responsive Reactions By Humans

Another possibility is that the AI-fed inoculation has no impact on the receiving person.

A person gets an AI-based inoculation related to misinformation or disinformation. Whereas most people “get it” and become immunized, there are bound to be people that will not react at all. They learn nothing from the inoculation. They are unresponsive to the AI attempt at immunizing them for either all or certain types of misinformation or disinformation.

Once again, this is comparable to inoculations for diseases.

AI ought to be devised to contend with such a circumstance.

AI Mistargeting

Imagine that an AI is hoping to immunize people regarding a particular topic that we’ll say is topic X, but it turns out that topic Y is instead being covered. The AI is mistargeting.

This is a twofold problem. Firstly, topic X has not been covered, defeating the presumed and hoped-for purpose of the AI inoculator. Secondly, topic Y has been covered, yet we might not have wanted people to be immunized on that topic.

Oops.

Questions abound. Could this have been prevented from happening? If it happens, can we undo the topic Y immunization? Can we still carry out the topic X inoculation, or will the person be less receptive or non-receptive due to the AI’s original mistargeting?

Lots of problematic concerns arise.

AI Under-Targeting

An AI provides an inoculation on topic Z. The people receiving the inoculation seem to have a minimal or nearly negligible reaction. The inoculation was insufficient to take hold.

You might be tempted to quickly claim that this is easily resolved. All you have to do is repeat the inoculation. Maybe yes, maybe no.

The AI inoculation might be of such limited value that even if people experience it a hundred times, the result is still a marginal outcome. You might need to boost the inoculation rather than simply repeating it.

Meanwhile, imagine that an attempt is made to boost the AI-fed inoculation, but this goes overboard. The boosted version causes hyper-reactions. Yikes, we have gone from bad to worse.

Cyber Breach Of The AI

Envision that AI is being used extensively to aid people in being inoculated from disinformation and misinformation.

A general reliance takes hold among people. They know and expect that the AI is going to present them with snippets that will open their eyes to what is known as disinformation and misinformation.

All is well and good, it seems.

An evil-doer is somehow able to pull off a cyber breach of the AI. They sneakily force into the AI some desired disinformation that they want people to think is not disinformation. The AI is arranged to make the actual disinformation appear to be true information. As well, the true information is made to appear as disinformation.

People are completely snookered. They are being disinformed by AI. On top of that, because they had become dependent upon the AI, and due to trusting that the AI was doing the right thing, they fall hook, line, and sinker for this breached AI. Without hesitation.

Given how readily disinformation can further spread, the evil-doer might relish that the existence of this kind of AI is their easiest and fastest way to make their insidious lies go around the world. Ironically, of course, having leveraged the AI inoculator to essentially spread the disease.
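
A toy demonstration of how devastatingly simple such a breach could be, assuming a hypothetical label store that the inoculator trusts; everything here is fabricated for illustration.

```python
# A toy demonstration of the breach scenario above: if an attacker flips
# the labels that the inoculator relies on, the very same trusted pipeline
# now certifies disinformation as true. All data here is fabricated.

labels = {
    "Aliens from Mars live among us": "disinformation",
    "The transit budget passed yesterday": "true",
}

def present(claim: str, label_store: dict) -> str:
    return f"{claim} -> shown to user as: {label_store[claim]}"

# Before the breach:
for claim in labels:
    print(present(claim, labels))

# After the breach, the attacker silently swaps every label:
breached = {c: ("true" if l == "disinformation" else "disinformation")
            for c, l in labels.items()}
for claim in breached:
    print(present(claim, breached))
```

Note that no model retraining is needed; flipping the trusted labels is enough to invert what users are told.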

Conclusion

Should we have AI be playing mind games with us?

Might AI for disinformation and misinformation inoculation be a menacing Trojan horse?

You can make a substantive case for worrying about such a nightmare.

Others scoff at such a possibility. People are smart enough to know when the AI attempts to trick them, they insist. People will not fall for such drivel. Only idiots would get themselves misled by such AI. Those are the usual retorts and counterarguments.

Not wanting to seem less than fully admiring of humans and human nature, I would merely suggest that there is ample indication that humans could fall for AI that misleads them.

There is an even greater issue that perhaps looms over all of this.

Who makes the AI and how does the AI algorithmically decide what is considered disinformation and misinformation?

An entire firefight is taking place today in the world at large about what specifically constitutes disinformation and misinformation. Some assert that facts are facts, thus there can never be any confusion over what counts as proper versus improper information. The devil though at times is in the details, that’s for darned sure.

A final remark for now. Abraham Lincoln famously stated: “You can fool all the people some of the time and some of the people all the time, but you cannot fool all the people all the time.”

Will AI that is used to aid in inoculating humankind against disinformation and misinformation be a vital tool for ensuring that not all people can be fooled all of the time? Or might it be used to fool more of the people more of the time?

Time will tell.

And that’s assuredly no disinformation or misinformation.
