AI Ethics And AI Law Grappling With Overlapping And Conflicting Ethical Factors Within AI

Do two wrongs make a right?

Whichever way that you decide to answer that longstanding and vexing question, the odds are that you are stepping whole-hog into a thorny ethical debate.

Let’s see why.

Some insist that there is never a situation in which two wrongs make a right. Their logic usually is that a wrong is a wrong. Thus, two wrongs are ostensibly nothing more than two wrongs. End of story. Others counterargue that sometimes a wrong can be entirely justly met with another wrong, especially when the subsequent wrong is considered correcting of the prior wrong. In a sense, the wrongs cancel each other out and you end up with a semblance of a rightful outcome.

Round and round goes the often-heated discourse on two wrongs and the questionable matter of a rightful result.

There are ethicists who would at times label the adage a type of mental fallacy. If the two wrongs are only tangentially connectable, you would seem on thin ice to claim that one of the wrongs is directly related to the other. Also, the finger-pointing at a prior wrong might be done simply as a distractor and a desperate attempt to avoid getting pinned on a later committed wrong. This is the proverbial use of a red herring.

A dizzying conundrum, that’s for sure.

Maybe it will help for us to mull over some specific examples.

You are caught speeding on a well-used highway. Darn it, speeding tickets are costly and also add onerously adverse points on your official driving record. Your insurance premiums will probably radically go up too. After thinking quite a bit about this exasperating and altogether annoying traffic ticket, you make the difficult decision to fight the pesky thing. Off to court, you go.

Standing there in a somber courtroom, you have your moment in the sun to explain what you were doing. Hopefully, the judge will see the world through your eyes and toss out the seemingly unwarranted and unjustified speeding ticket (as per your view on the matter).

You stoically explain that there were lots of other cars speeding on the highway. None of those drivers received speeding tickets. Since they didn’t get speeding tickets, it only seems fair and reasonable that you should have your speeding ticket canceled.

Two wrongs make a right.

That’s the totality of your proffered argument.

Does the judge buy into your logic and let you off the hook?

Generally, the odds would seem low that your contention will prevail in this instance. The stern answer by the judge might be that two wrongs don’t make a right. Your ticket stands as is. Next case!

Let’s help out by reshaping your argument.

The new explanation is this. There were lots of other cars speeding on the highway. You were essentially forced into going at the same speed as those other cars. Had you gone slower, the chances were high that your lessened speed might have led to a car crash, during which people could be seriously injured or killed. To try and drive safely, thoughtfully aiming to avoid a calamitous car crash, you had no choice but to drive at the prevailing speed at the time.

Yes, once again, two wrongs make a right, but this angle differs from the prior elucidation.

Will the judge see the light and dismiss your speeding ticket?

It would seem like this is a more compelling argument and has a better chance of succeeding.

We’ll try one more variant and see how else this might be couched.

Pretend that we are starting the speeding ticket courtroom story all over again. Start fresh. You tell the following tale to the attentive judge. There was a drunk driver on the highway. You saw that the errant driver was weaving perilously. Upon approaching this imminent danger, you decided to momentarily speed up to get safely past the inebriated driver. Had you not done so, the likelihood was that the wanton driver would have veered into your car, killing you and themselves too. The traffic officer stopped you, and completely missed stopping the drunk driver, which, sadly and lamentably, is a rather disconcerting and disturbing twist of justice or injustice, as you see it.

Two wrongs do in fact make a right, you respectfully urge.

Maybe this is convincing, maybe not.

The thing is, the overarching mindful contention that two wrongs make a right is a powerful form of human thinking and some would say a potent kind of mental bias. As a persuasive tool, it can be used to either clarify matters or confound matters. The mind-bending construct appears to invoke ethical keystones. If you believe that ethical behavior consists of doing the right thing, perhaps the right thing to do was perform a so-called wrong to effectuate a right. Of course, the extremism of how far we can stretch the proverb is open to serious doubt.

For example, suppose that when trying to talk your way out of the traffic ticket, you explained that you were speeding because you had heard on the news that outer space aliens were on their way to take over earth (this was indeed a claim made by someone stopped for speeding, according to recent news reports). This seems quite zany. Probably not a wise move to offer this postulated perspective.

One perhaps notable aspect about this discussion on wrongs and rights is that I’ve been using an example entailing speeding tickets, which takes us into the legal realm. You might be tempted to say that speeding is a simple legal matter. Either someone was speeding, or they were not speeding. If they were speeding, they deserve to get the ticket and take their lumps accordingly. If they were not speeding, perhaps the radar used to track their speed was defective, and the person ergo ought to be let free since the speeding ticket was erroneously assigned.

Is the law really that cut and dried?

In California, the official driving law pertaining to speeding is known as CVC 22350 and says this: “No person shall drive a vehicle upon a highway at a speed greater than is reasonable or prudent having due regard for weather, visibility, the traffic on, and the surface and width of, the highway, and in no event at a speed which endangers the safety of persons or property.”

By closely examining the words of that law, I think you will quickly realize that speeding is not necessarily as cut and dried as might be otherwise assumed. Suppose that a driver is rushing to the hospital due to a medical emergency taking place inside the car. We might allow that this is legally okay as long as the speeding did not excessively endanger the safety of others. There is a lot of giving and taking in something as seemingly simple as the act of speeding. Extenuating circumstances can be a huge differentiator.

All told, I hope that this showcases that thorny ethical matters and even supposedly cut-and-dried legal matters are often imbued with all sorts of complications. There can be a multitude of factors that come into play in an ethical setting. For those who cling to an abstract simplified notion of doing the right thing versus doing the wrong thing, the reality of daily life perhaps blurs what the right thing and the wrong thing consist of, depending upon prevailing cultural norms and a slew of other considerations (well, some ethicists would likely argue that there are absolutes in the right versus wrong conundrum, but that’s something I’ve covered elsewhere in my writings and won’t get into herein).

Shifting gears, how do we make tough choices, and what is the nature of our decision-making process?

Researchers often describe our decision-making as embodying a litany of lexicographic choices, as articulated by this indication: “People accommodate their inability to make a great number of choices by making lexicographic choices. That is, they ignore the information about most facets of the object of their choice, and focus on a few that they hold to be most important. Thus, when they buy a car, they may examine its relative price, miles per gallon, and color, or some other such mix of features—but ignore scores of other attributes. The same holds true of moral choices. People, when making a major donation, may take into account the goals the given charity serves, whether it services people in their own community or overseas, and whether it has a reputation as an honest agency, or some other such mix. They will ignore, in the process, many other features of the given charity such as its long-term record, recent changes in leadership, its ratio of expenses to payouts, and so on” (as mentioned in “AI Assisted Ethics” by Amitai Etzioni and Oren Etzioni, Ethics And Information Technology).
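
To make lexicographic choice concrete, here is a minimal Python sketch of the car-buying example (the cars, their attribute values, and the priority ordering of price then miles per gallon are invented for illustration and are not drawn from the cited paper):

```python
# A minimal sketch of lexicographic choice: rank options on a couple of
# prioritized attributes and ignore the rest. The cars and the priorities
# here are hypothetical illustrations.
cars = [
    {"name": "Car A", "price": 24000, "mpg": 34, "color": "blue",
     "trunk_liters": 420, "warranty_years": 3},
    {"name": "Car B", "price": 22000, "mpg": 31, "color": "red",
     "trunk_liters": 510, "warranty_years": 5},
    {"name": "Car C", "price": 22000, "mpg": 36, "color": "blue",
     "trunk_liters": 380, "warranty_years": 2},
]

# Only price (lower wins) and then mpg (higher wins) are consulted;
# the scores of every other attribute play no role in the decision.
def lexicographic_pick(options):
    return min(options, key=lambda c: (c["price"], -c["mpg"]))

print(lexicographic_pick(cars)["name"])  # "Car C": cheapest tier, best mpg
```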

Returning to the speeding driving situation, someone who is urgently driving to a hospital might focus entirely on their own urgency and downplay or ignore that they endangered other cars while driving at high speeds. The person might not be doing this deviously. They could genuinely believe that their own course of action was fully justified and be oblivious to the other possible consequences or spillover from their behavior.

You could persuasively contend that ethical thinking is replete with a multitude of factors. It is a decidedly multi-dimensional form of problem-solving. For just about any real-world ethical setting that you might devise or describe, the likelihood is that numerous factors come into play under the hood, as it were.

I’ve tried to lead you toward an important theme.

Ethical considerations cannot usually be ascertained solely by using a singular factor. A multitude of factors is more so the real-world context. That being said, there is little doubt that some factors are more vital than other factors. Thus, not all factors are necessarily equal in weight or significance. Nonetheless, rarely can you swipe away the other factors and merely focus exclusively on one alone.
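
As a rough sketch of that theme, consider the following toy weighting of multiple ethical factors (the factor names, weights, scores, and the 0.5 floor are all hypothetical stand-ins, not a proposed methodology). Notice how a blended total can look respectable even while a single factor is failing badly:

```python
# A minimal sketch of weighing multiple ethical factors at once; the
# factor names, weights, and scores are invented for illustration.
weights = {"fairness": 0.30, "transparency": 0.25,
           "privacy": 0.25, "non_maleficence": 0.20}

# Hypothetical assessment scores for some AI system, each on a 0-to-1 scale.
scores = {"fairness": 0.9, "transparency": 0.4,
          "privacy": 0.7, "non_maleficence": 0.8}

# A blended total can mask a badly failing factor, so a floor check on
# every individual factor matters too.
total = sum(weights[f] * scores[f] for f in weights)
failing = [f for f in weights if scores[f] < 0.5]

print(f"weighted total: {total:.2f}")         # roughly 0.70 here
print(f"factors below the floor: {failing}")  # ['transparency']
```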

Why all this fuss about ethics and the multi-dimensional elements involved?

Because we need to have a serious chat about AI Ethics.

You might be aware that there is a rising interest in AI Ethics. And for good reasons. The essence is that we are pell-mell rushing forward with all manner of AI systems that are becoming integral to our daily lives. Initially, many referred to this as AI For Good, meaning that we could use modern-day AI to aid in solving the world’s toughest problems. Meanwhile, we have come to see that some AI systems are replete with untoward biases and inequities. This is generally referred to as AI For Bad.

A type of tension is mounting between efforts to produce and promulgate AI that is ostensibly AI For Good and simultaneous efforts to stop or catch the AI For Bad. Realize that any given AI system can be both at the same time. An AI system that enables more people to get, say, mortgage loans for housing would seem to be an AI For Good instance, though if the AI embeds racial or gender biases in making the loan decisions we would abundantly seem to agree that this is also AI For Bad. The pervasive emergence of algorithmic decision-making (ADM) based on the latest in AI has put us all on edge about what is really going on inside the computational decision-making of the AI.

Efforts to clarify what we want AI to be or become consist of stipulating key AI Ethics principles. The hope is that by abiding by those cornerstone precepts, we will lean into AI For Good and shy away from AI For Bad. Notably, this leaning into and away from is going to be harder than it might seem at first glance. For my extensive coverage on AI Ethics, see the link here and the link here, just to name a few.

You might not be familiar with the AI Ethics principles that are being floated here and there. Let’s take a moment to briefly consider some of the key precepts.

As stated by the Vatican in the Rome Call For AI Ethics and as I’ve covered in-depth at the link here, these are their identified six primary AI ethics principles:

  • Transparency: In principle, AI systems must be explainable
  • Inclusion: The needs of all human beings must be taken into consideration so that everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop
  • Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency
  • Impartiality: Do not create or act according to bias, thus safeguarding fairness and human dignity
  • Reliability: AI systems must be able to work reliably
  • Security and privacy: AI systems must work securely and respect the privacy of users.

As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I’ve covered in-depth at the link here, these are their five primary AI ethics principles:

  • Responsible: DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.
  • Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  • Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.
  • Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
  • Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

I’ve also discussed various collective analyses of AI ethics principles, including a set devised by researchers who examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled “The Global Landscape Of AI Ethics Guidelines” (published in Nature Machine Intelligence), which my coverage explores at the link here, and which led to this keystone list:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is a tough nut to crack. It is easy overall to do some handwaving about what AI Ethics precepts are and how they should generally be observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road.

Take for instance the concept of imbuing fairness into an AI system.

Sure, it would seem readily apparent that we want AI systems to be fair. Any counterargument is going to seemingly fall flat. I ask you: can you define fairness in a concrete way that can be fully articulated and ultimately coded into a computer system for use within an AI system?

That’s a tall order.

Researchers are trying mightily to probe and engineer these AI ethics precepts, but roadblocks continue to stymie such efforts: “The plethora of fairness metric definitions illustrates that fairness cannot be reduced to a concise mathematical definition. Fairness is dynamic, social in nature, application, and context-specific, and not just an abstract or universal statistical problem. Therefore, it is important to adopt a socio-technical approach to fairness in order to have realistic fairness definitions for different contexts as well as task specific datasets for machine learning model development and evaluation” (as stated in “AI Assisted Ethics” by Amitai Etzioni and Oren Etzioni, Ethics And Information Technology).

In case you are wondering why “fairness” would be complicated, consider some additional insights by those same researchers about the hurdles facing defining and coding up of fairness: “These metrics can be classified into many categories: fairness through unawareness, individual fairness, demographic parity, disparate impact, differential validity, proxy discrimination, equality of opportunity, etc. However, not all critically important lines of inquiry can be answered through observations alone. Moreover, depending on the relationship between a protected attribute and the data, certain observational definitions of fairness can increase discrimination. Hence, research to improve fairness metrics continues.”
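
To give a small taste of why such metrics can pull in different directions, here is a minimal sketch computing two of the metrics named above, the demographic parity difference and the disparate impact ratio, on invented loan-approval counts (the groups, the counts, and the threshold are illustrative only):

```python
# A minimal sketch of two fairness metrics on invented loan-approval
# counts; the groups, numbers, and 0.8 threshold are illustrative only.
approvals = {
    "group_a": (70, 100),  # (approved, applicants)
    "group_b": (50, 100),
}

rate_a = approvals["group_a"][0] / approvals["group_a"][1]  # 0.70
rate_b = approvals["group_b"][0] / approvals["group_b"][1]  # 0.50

# Demographic parity difference: the gap between group approval rates.
parity_gap = abs(rate_a - rate_b)  # 0.20

# Disparate impact ratio: lower rate over higher rate; the so-called
# four-fifths rule flags ratios below 0.8.
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)  # ~0.71

print(f"parity gap: {parity_gap:.2f}, impact ratio: {impact_ratio:.2f}")
# The two metrics can disagree about which of two systems is "fairer",
# part of why fairness resists a single mathematical definition.
```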

Okay, it would seem apparent that fairness is not so easy to devise and construct. The thing is, the same can be said of the other AI Ethics principles too, such as figuring out transparency, justice, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust, sustainability, dignity, and solidarity. It is a multitude of factors, all of which are essentially ambiguous and need to be made unambiguous when serving as guardrails for AI.

Not wanting to further stir up this hornet’s nest, we must nonetheless take note that the multitude of factors can run afoul of each other. There are circumstances where seeking to attain one of the factors can end up conflicting with one or more of the other factors. You might devise an AI system that can be considered fair, but suppose that the only viable means of doing so sacrifices the desire to have transparency? This is a real problem today facing many of the AI systems that incorporate Machine Learning (ML) and Deep Learning (DL), as I’ve analyzed at the link here.

In short, human ethical thinking is rife with a multitude of factors that get weighed when making ethical or moral decisions. We are striving toward crafting AI systems that make decisions using ADM (algorithmic decision-making). There is a sensible basis for wanting the AI to abide by ethical precepts, such as the ones mentioned earlier herein.

Trying to reduce the AI Ethics precepts to one single precept is not particularly helpful. Doing so might simplify the crafting of the AI, but in the end, the AI then undercuts the other AI Ethics precepts. The AI might be touted as attaining one of the principles, meanwhile gloomily doing poorly on the others. This would not seem satisfying for society.

Be cautious when a company announces it has devised an AI system that is, for example, allegedly fair. Besides questioning how such an attestation can be made, the AI at the same time might be badly enmeshed with a variety of other ills and have completely skirted transparency, non-maleficence, responsibility, privacy, etc. Do not be fooled by Twitter-sized headline-grabbing claims that distract from the myriad of other AI Ethics principles that should also be observed.

Envision that a corporation proudly proclaims that it has an AI-based mortgage loan granting system that is absolutely fair in the loan selection process. Turns out, unbeknownst to those using the AI system, privacy went out the window and the firm is making use of the collected data about the loan applicants in the most egregious of ways. Fairness was supposedly achieved at the sacrifice of privacy, let’s say.

Does one right and one wrong make a right?

Extending the example, suppose we find out that the AI-based loan granting app is not actually fair anyway. The AI has hidden biases that only became apparent after many thousands of loans were granted and many thousands of loan applicants were turned down. Oops, we have an AI system that isn’t fair and also isn’t protecting privacy. The company insists that at least their AI system is no worse than prior efforts that used human loan agents to make the decisions. In fact, they point out that the AI system is faster and gives answers about loan requests in seconds rather than having to wait perhaps days when relying on human agents.

Aha, two wrongs make a right!

We have come full circle. In the case of a traffic ticket for speeding, we had the classic claim that since others were wrongfully speeding, the driver nabbed with a ticket ought to be let off the hook. Two wrongs make a right. Now, in the case of loan granting, we have the claim that since human loan agents are seemingly “wrong” in a manner of speaking, we are asked to look the other way when an AI system granting loans is also considered wrong in how it works.

Two wrongs make a right?

Makes your head spin.

At this juncture of this discussion, I’d bet that you are desirous of some additional real-world examples that could highlight the multi-dimensional facets of AI Ethics and Ethical AI.

There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.

Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about the multi-dimensionality of AI Ethics, and if so, what does this showcase?

Allow me a moment to unpack the question.

First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

I’d like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don’t yet even know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.

Self-Driving Cars And Multi-Dimensionality Of Ethical AI

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad of aspects that come into play on this topic.

First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.

Furthermore, whenever I state that an AI driving system doesn’t do some particular thing, this can later be overtaken by developers who in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.

I trust that provides a sufficient litany of caveats to underlie what I am about to relate.

We are primed now to do a deep dive into self-driving cars and Ethical AI possibilities entailing the emphasis on the multi-dimensionality of AI Ethics.

Let’s use a readily understandable example. An AI-based self-driving car is underway on your neighborhood streets and seems to be driving safely. At first, you had devoted special attention to each time that you managed to catch a glimpse of the self-driving car. The autonomous vehicle stood out with its rack of electronic sensors that included video cameras, radar units, LIDAR devices, and the like. After many weeks of the self-driving car cruising around your community, you now barely notice it. As far as you are concerned, it is merely another car on the already busy public roadways.

Lest you think it is impossible or implausible to become familiar with seeing self-driving cars, I’ve written frequently about how the locales that are within the scope of self-driving car tryouts have gradually gotten used to seeing the spruced-up vehicles, see my analysis at this link here. Many of the locals eventually shifted from mouth-gaping rapt gawking to now emitting an expansive yawn of boredom to witness those meandering self-driving cars.

Probably the main reason right now that they might notice the autonomous vehicles is the irritation and exasperation factor. The by-the-book AI driving systems make sure the cars obey all speed limits and rules of the road. Hectic human drivers in their traditional human-driven cars get irked at times when stuck behind strictly law-abiding AI-based self-driving cars.

That’s something we might all need to get accustomed to, rightly or wrongly.

Back to our tale. One day, suppose a self-driving car in your town or city is driving along and opts to exceed the speed limit.

Say what?

Yes, the ever law-abiding AI self-driving car went faster than the posted speed limit. Someone managed to catch the occurrence on their cellphone. The video was promptly shared via social media and became a viral sensation. The AI that had been stridently obeying the laws had opted to violate the law.

Online responses were varied, as you might imagine. Is AI above the law? If so, what might happen next? Perhaps AI will decide to overtake humanity. We will be crushed by lawless AI. The dreaded slippery slope of humans creating AI and then being torn asunder by AI will finally happen. An existential risk was showcased to us, as plain as day, on the day that the AI self-driving car first went over the speed limit. For my discussion about AI criminality and accountability, see the link here.

In any case, hopefully, not everyone will react in a similar vein. Let’s trust that humankind will take a calmer view on such matters.

We will take as an undisputed fact that the self-driving car went faster than the speed limit. Assume that even the automaker and the self-driving tech firm agree that the AI driving system led the autonomous vehicle to break the law. Public outrage insists that the self-driving car be issued a traffic citation for speeding. Give that lawbreaker AI a driving infraction. Maybe the AI needs to be sent to driving school just like the rest of us that deal with speeding tickets.

Time to invoke a handy-dandy proverb.

Are you ready?

Humans in your city or town go faster than the speed limit, doing so quite frequently. This is a known fact. They aren’t getting tickets. The AI was merely doing what the human drivers do.

Conclusion: Two wrongs make a right.

Imagine that the lawyers for the automaker and self-driving tech firm are standing in a courtroom and making that kind of plea to the judge. Your honor, our AI was merely driving as humans drive. If you aren’t going to give out hundreds or maybe thousands of speeding tickets to human speeders, you shouldn’t penalize AI for its similar actions. We rest our case.

Maybe this is a convincing argument or maybe not.

We can further contemplate the matter. Suppose that the AI had a bug or error in it. The circumstance of going faster than the posted speed limit was due to the error in the coding. It only happened once. The AI developers dug into the code and found the error. They have corrected the error. As far as they can discern, the self-driving car will never go over the speed limit again.
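
Purely as a hypothetical illustration of how a single coding slip might produce exactly this behavior, consider a units mix-up in a speed-cap check (this sketch does not depict any actual AI driving system; the function names and numbers are invented):

```python
# Hypothetical only: a units mix-up in a speed-cap check, and its fix.
# No actual AI driving system's code is depicted here.
KMH_PER_MPH = 1.60934

def capped_speed_buggy(target_mph: float, limit_kmh: float) -> float:
    # Bug: compares an mph value directly against a km/h limit, so a
    # 45 mph target slips past a 65 km/h (about 40 mph) limit.
    return target_mph if target_mph <= limit_kmh else limit_kmh

def capped_speed_fixed(target_mph: float, limit_kmh: float) -> float:
    # Fix: convert the limit into mph before comparing.
    return min(target_mph, limit_kmh / KMH_PER_MPH)

print(capped_speed_buggy(45.0, 65.0))            # 45.0 -- the car speeds
print(round(capped_speed_fixed(45.0, 65.0), 1))  # 40.4 -- properly capped
```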

Should the automaker or self-driving tech firm fess up and reveal that the excessive speed was due to an error in their AI system? You might argue that they should do so, perhaps it is the right thing to do on an ethical basis. Be transparent about what the AI consists of.

The worry, though, for the automaker and self-driving tech firm might be that if people know one bug arose, panic will ensue and people might start to become afraid that more bugs are bound to be hidden within the AI. This could undermine faith in self-driving cars. People might not want to ride in them anymore. The entirety of the pursuit of AI-based self-driving cars tumbles to the ground.

Transparency is no free lunch.

We can make the story gloomier in a sense.

Scratch from your mind that the AI had a bug in it. Start fresh.

Suppose the AI was devised using Machine Learning and Deep Learning, a form of computational pattern matching. The ML/DL can end up being extremely arcane in a mathematical way. An ongoing and troubling qualm about many of today’s ML/DL is that there is often no logically explainable means to get the AI system to identify how it arrived at a given decision. This area of AI is known as XAI (explainable AI) and there is a lot of pressure to try and ensure that AI systems can explain how they arrived at an answer or outcome, see my coverage at the link here.
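
To give a flavor of what a post-hoc XAI probe can look like, here is a sketch of permutation importance run against a toy stand-in “model” (the model, the feature names, and the data are all invented; explaining real ML/DL systems is vastly more involved):

```python
# A minimal sketch of one common post-hoc XAI probe: permutation
# importance. The "model" and data are toy stand-ins; the point is only
# that the explanation is bolted on after the fact, not read off the model.
import random

# Toy stand-in for an opaque model: callers see only predictions.
def opaque_model(features):
    speed_gap, traffic_density, lane_width = features
    return 1 if (2.0 * speed_gap - 0.5 * traffic_density) > 1.0 else 0

# Tiny synthetic dataset labeled by the model itself (baseline accuracy 1.0).
dataset = [((g, d, w), opaque_model((g, d, w)))
           for g in (0.0, 1.0, 2.0)
           for d in (0.0, 1.0)
           for w in (3.0, 3.5)]

def accuracy(rows):
    return sum(opaque_model(x) == y for x, y in rows) / len(rows)

def permutation_importance(rows, idx, seed=0):
    # Shuffle one feature column and measure how much accuracy drops;
    # a bigger drop suggests the model leans harder on that feature.
    rng = random.Random(seed)
    column = [x[idx] for x, _ in rows]
    rng.shuffle(column)
    perturbed = [(x[:idx] + (v,) + x[idx + 1:], y)
                 for (x, y), v in zip(rows, column)]
    return accuracy(rows) - accuracy(perturbed)

for idx, name in enumerate(["speed_gap", "traffic_density", "lane_width"]):
    print(name, round(permutation_importance(dataset, idx), 2))
# lane_width scores 0.0 because the model never consults it.
```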

Consider this. The AI developers dig into the ML/DL of the AI driving system and are unable to ferret out why the AI opted to have the autonomous vehicle go faster than the speed limit. It is baffling. Was it a bug? Was it working as designed? They don’t know. Unfortunately, as a result of the computational encoding, there really doesn’t seem to be a means to figure out what prompted the AI to do this.

Whereas a moment ago we were discussing transparency in terms of revealing that the AI had a bug, now the issue is that there isn’t any transparency available at all. The automaker and the self-driving tech firm are in the dark about what occurred. They couldn’t tell you even if they wanted to do so (of course, this too becomes a matter of transparency, specifically revealing that they don’t know what caused the speeding and that their efforts to discover the basis were unrevealing).

Should we let the AI off the hook?

That is a bit misleading since the AI isn’t sentient, and we ought to instead be asking whether we should let the automaker and the self-driving tech firm off the hook. They devised the AI driving system. They are the ones that put self-driving cars onto our public streets. Humans are responsible and they are the ones who need to be held accountable.

Conclusion

We earlier took a look at some of the AI Ethics principles that are today’s foundational set of guidelines for AI development, AI fielding, and AI usage.

As a reminder, one such set consists of:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

In the use case of AI-based self-driving cars, are there tradeoffs amongst those AI Ethics precepts that you are willing to allow in order to presumably more expeditiously arrive at self-driving cars?

One argument is that it is reasonable to cut short some of those factors to try and attain AI for autonomous vehicles. The logic is that in the United States alone, there are about 40,000 human fatalities per year due to car crashes and around 2.5 million injuries (see more stats at the link here). The sooner we can get self-driving cars underway, the sooner we can reduce those ghastly numbers.

You might say that ethically it makes sense to shave some corners to save lives and diminish injuries.

Which of the factors are you willing to undercut?

Alright, you might say, give up on transparency to get AI that will save lives. Maybe the same can be said for sustainability. Perhaps the same for privacy. How far are you willing to go?

We need to be asking these same questions about all AI systems.

I’ve used the exemplar of AI driving systems to illustrate this major theme about AI and AI Ethics. The adoption of AI is a multifold ethics factors problem. Move beyond the AI of self-driving cars and start asking the same multi-dimensional questions about the AI that exists in systems all around you.

As you do so, there might be a tiny voice in the recesses of your mind asking whether two wrongs make a right, or whether two wrongs don’t make a right.

We’d better pick the right answer.
