Be forewarned, there are more than you can shake a stick at.
Yes, indeed, there are so many AI Ethics guidelines popping up these days that you are perfectly right to be slightly vexed and understandably confused. What is going on, you might be wondering. The good news is that this is a healthy sign and something we can generally herald and cherish. Of course, there can sometimes be too much of a good thing, as I’ll explain in a moment.
First, we do in fact earnestly need a kind of North Star, as it were, for AI Ethics to aid in providing guardrails and insights for the advent of AI.
You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad. For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I’ve discussed at the link here.
Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to rectify the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut the AI For Bad while simultaneously heralding and promoting the preferable AI For Good.
On a related notion, I am an advocate of trying to use AI as part of the solution to AI woes, fighting fire with fire in that manner of thinking. We might, for example, embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real time any discriminatory efforts, see my discussion at the link here. We could also have a separate AI system that acts as a type of AI Ethics monitor. The AI system serves as an overseer to track and detect when another AI is going into the unethical abyss (see my analysis of such capabilities at the link here, and a small sketch below).
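To make that fire-with-fire notion a bit more concrete, here is a minimal sketch in Python of what such a separate AI Ethics monitor could do, namely watch a stream of decisions emitted by another AI and raise an alert when approval rates diverge markedly across groups. Everything here is an illustrative assumption on my part, including the record layout, the function names, and the 0.8 threshold borrowed from the well-known four-fifths rule of thumb used in disparate impact analysis; this is a sketch, not a depiction of any particular product.

```python
# A minimal, illustrative sketch of a separate "AI Ethics monitor" that
# watches decisions made by another AI system. All names and the 0.8
# (four-fifths rule) threshold are illustrative assumptions.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    group: str        # a protected attribute, e.g., a demographic group
    approved: bool    # the monitored AI's decision for this case

def disparate_impact_ratio(records):
    """Ratio of the lowest group approval rate to the highest."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r.group] += 1
        approvals[r.group] += r.approved
    rates = {g: approvals[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

def monitor(records, threshold=0.8):
    """Alert if the monitored AI's approval rates diverge across groups."""
    ratio, rates = disparate_impact_ratio(records)
    if ratio < threshold:
        print(f"ALERT: possible discriminatory pattern (ratio={ratio:.2f}); rates={rates}")
    else:
        print(f"OK: group approval rates within tolerance (ratio={ratio:.2f})")

# Example: the monitor catches a skew in a stream of logged decisions.
log = ([DecisionRecord("group_a", True)] * 80 + [DecisionRecord("group_a", False)] * 20
       + [DecisionRecord("group_b", True)] * 50 + [DecisionRecord("group_b", False)] * 50)
monitor(log)  # prints an ALERT, since 0.50 / 0.80 = 0.62 falls below the 0.8 threshold
```

In a real deployment, such a monitor would presumably run continuously alongside the monitored AI and route its alerts to human overseers, rather than merely printing to a console.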
In a moment, I’ll share with you some overarching principles underlying AI Ethics. There are lots of these kinds of lists floating around here and there. You could say that there isn’t as yet a singular list of collective appeal and unmitigated concurrence (well, maybe kind of there is, which I’ll be focusing on herein). The good news either way is that at least there are readily available AI Ethics lists and they tend to be quite similar. All told, this suggests that, by a form of reasoned convergence of sorts, we are finding our way toward a general commonality of what AI Ethics consists of. For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few.
One of my favorite AI Ethics lists was released last year by the United Nations Educational, Scientific and Cultural Organization (UNESCO). I would say it is appropriate and fair to express that this was a historic moment since the set of AI Ethics principles presented was adopted by nearly 200 member countries of the UN. As aptly stated by UNESCO about the dangers of AI: “We see increased gender and ethnic bias, significant threats to privacy, dignity and agency, dangers of mass surveillance, and increased use of unreliable Artificial Intelligence technologies in law enforcement, to name a few. Until now, there were no universal standards to provide an answer to these issues” (per the UNESCO posting entitled “193 Countries Adopt First-Ever Global Agreement On The Ethics Of Artificial Intelligence” on November 25, 2021).
You could reasonably argue that this AI Ethics elicitation by UNESCO is the closest thing that we have toward a truly universal and all-agreed Ethical AI stipulation. Some though prefer to describe the listing as a global normative framework. To them, the listing is useful as a backdrop, but it is not necessarily what they themselves wish to directly utilize per se.
Why so?
One viewpoint is that the UNESCO AI Ethics list is overly complicated and not easily digested (some would harshly say it uses the contortionist bloated language of diplomacy). As such, some have crafted their own Ethical AI listings and borrowed or aligned such declarations extensively with the UNESCO version. Others had already created their AI Ethics guidelines before the UNESCO release, and upon close inspection decided that they were sufficiently in accord with the UNESCO set, thus there was no need to change their preexisting proprietary approach.
Frankly, some simply have that veritable Not Invented Here (NIH) mindset and have cantankerously opted to start from scratch when devising their AI Ethics principles. Or, perhaps beneficially, they want to feel a sense of pride and ownership by making their Ethical AI precepts specific to themselves. Another angle is that some pick an AI Ethics listing that has become popular in their industry or their particular niche, whereas the UNESCO set is perhaps more broadly aimed.
And so on.
Those that are newbies to the AI Ethics realm are frequently unaware that the UNESCO AI Ethics listing exists. That’s a shame. That’s a darned shame. I hope to right that “wrong” by providing you herein with a tasty sampling of what the UNESCO agreement contains. Perhaps doing so will further whet your appetite to look into the AI Ethics arena all told.
For cynics among you that blather about AI Ethics lists as being nothing more than words on paper, I would fervently assert that we do need words on paper. Without some semblance of a structure or a plan, we are going to be forever lost amongst an increasingly vast and potentially endangering forest of untoward AI systems.
I relish a key quote by UNESCO Chief Audrey Azoulay, who said this when the UNESCO AI Ethics principles were finally approved and published: “The world needs rules for artificial intelligence to benefit humanity. The Recommendation on the Ethics of AI is a major answer. It sets the first global normative framework while giving States the responsibility to apply it at their level. UNESCO will support its 193 Member states in its implementation and ask them to report regularly on their progress and practice” (as stated in the same UNESCO blog post noted above).
Let’s briefly cover some of the overall Ethical AI precepts that I’ve previously discussed in my columns, illustrating what ought to be a vital consideration for anyone and everyone that is crafting, fielding, or using AI. We’ll then take a peek at the UNESCO AI Ethics precepts.
As stated by the Vatican in the Rome Call For AI Ethics and as I’ve covered at the link here, these are their identified six primary AI ethics principles:
- Transparency: In principle, AI systems must be explainable.
- Inclusion: The needs of all human beings must be taken into consideration so that everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop.
- Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency.
- Impartiality: Do not create or act according to bias, thus safeguarding fairness and human dignity.
- Reliability: AI systems must be able to work reliably.
- Security and privacy: AI systems must work securely and respect the privacy of users.
As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I’ve covered at the link here, these are their five primary AI ethics principles:
- Responsible: DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
- Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
- Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.
- Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
- Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.
I’ve also discussed various collective analyses of AI ethics principles, including a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled “The Global Landscape Of AI Ethics Guidelines” (published in Nature), which my coverage explores at the link here, and which led to this keystone list:
- Transparency
- Justice & Fairness
- Non-Maleficence
- Responsibility
- Privacy
- Beneficence
- Freedom & Autonomy
- Trust
- Sustainability
- Dignity
- Solidarity
As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack. It is easy to do some overall handwaving about what AI Ethics precepts are and how they should generally be observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road.
The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As earlier stated, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.
Now that you are oriented toward the overall nature of AI Ethics precepts, let’s next dig into the UNESCO formulation of AI Ethics principles.
Many lists of AI Ethics will start with a preamble that sets the stage for why the list is needed. The preamble is usually relatively short and just emphasizes that we need to be aiming toward producing AI that abides by human ethical values. As you might imagine, the UNESCO listing takes this concept to a whole other level of embellishment.
Consider this rather grand opening remark of the UNESCO document: “Recognizing the profound and dynamic positive and negative impacts of artificial intelligence (AI) on societies, environment, ecosystems and human lives, including the human mind, in part because of the new ways in which its use influences human thinking, interaction and decision-making and affects education, human, social and natural sciences, culture, and communication and information,” and furthermore, “Considering that AI technologies can be of great service to humanity and all countries can benefit from them, but also raise fundamental ethical concerns, for instance regarding the biases they can embed and exacerbate, potentially resulting in discrimination, inequality, digital divides, exclusion and a threat to cultural, social and biological diversity and social or economic divides; the need for transparency and understandability of the workings of algorithms and the data with which they have been trained; and their potential impact on, including but not limited to, human dignity, human rights and fundamental freedoms, gender equality, democracy, social, economic, political and cultural processes, scientific and engineering practices, animal welfare, and the environment and ecosystems” – they then proceed to lay out the core AI Ethics precepts.
I don’t want to belabor this steeply elaborated wording, though I do have to admit that this additional passage hits home for those that are keenly immersed in AI Ethics considerations, namely that the document “approaches AI ethics as a systematic normative reflection, based on a holistic, comprehensive, multicultural and evolving framework of interdependent values, principles and actions that can guide societies in dealing responsibly with the known and unknown impacts of AI technologies on human beings, societies and the environment and ecosystems, and offers them a basis to accept or reject AI technologies. It considers ethics as a dynamic basis for the normative evaluation and guidance of AI technologies, referring to human dignity, well-being and the prevention of harm as a compass and as rooted in the ethics of science and technology.”
A mouthful, filled with topnotch nourishment.
Moving on, here are the stated objectives or goals associated with the UNESCO AI Ethics precepts (quoting from the UNESCO document):
- To provide a universal framework of values, principles and actions to guide States in the formulation of their legislation, policies or other instruments regarding AI, consistent with international law;
- To guide the actions of individuals, groups, communities, institutions and private sector companies to ensure the embedding of ethics in all stages of the AI system life cycle;
- To protect, promote and respect human rights and fundamental freedoms, human dignity and equality, including gender equality; to safeguard the interests of present and future generations; to preserve the environment, biodiversity and ecosystems; and to respect cultural diversity in all stages of the AI system life cycle;
- To foster multi-stakeholder, multidisciplinary and pluralistic dialogue and consensus-building about ethical issues relating to AI systems;
- To promote equitable access to developments and knowledge in the field of AI and the sharing of benefits, with particular attention to the needs and contributions of LMICs, including LDCs, LLDCs and SIDS.
There is a bit of UN jargon contained within the document, such as the reference in that last bulleted point to LMICs (low- and middle-income countries), LDCs (least developed countries), LLDCs (landlocked developing countries), and SIDS (small island developing States). Do not let that dissuade you from reading and garnering value from the UNESCO material. Plus, one might so advise, we might as well all become familiar with the language of global finesse anyway.
Following the lengthy preamble, the initial list-oriented portion of the UNESCO document discusses the key values underlying the AI Ethics guidelines and consists of twenty-four pronouncements that are organized into these four categories:
Values
- Respect, protection and promotion of human rights and fundamental freedoms and human dignity
- Environment and ecosystem flourishing
- Ensuring diversity and inclusiveness
- Living in peaceful, just and interconnected societies
The next segment discusses the key principles underlying the AI Ethics guidelines and consists of nearly twenty-five pronouncements that are organized into these ten categories:
Principles
1) Proportionality and Do No Harm
2) Safety and Security
3) Fairness and Non-discrimination
4) Sustainability
5) Rights to Privacy and Data Protection
6) Human Oversight and Determination
7) Transparency and Explainability
8) Responsibility and Accountability
9) Awareness and Literacy
10) Multi-Stakeholder and Adaptive Governance and Collaboration
After covering those values and principles, the next section entails areas of policy action.
The notion of the policy actions is that advice is being given regarding how to operationalize the set of values and principles. I favor this kind of added material in any proffered AI Ethics precepts. Some of the Ethical AI lists are solely a dry and stark listing and lack any indication or recommendation about how to turn those cornerstone precepts into something tangibly workable. Regrettably, without a sense of operationalization, those that come upon an AI Ethics listing are bound to be unsure of what to do with the list.
AI Ethics has to be both words and deeds.
Here are the areas of policy action that UNESCO provides:
Policy Action Areas
- Policy Area 1: Ethical Impact Assessment
- Policy Area 2: Ethical Governance and Stewardship
- Policy Area 3: Data Policy
- Policy Area 4: Development and International Cooperation
- Policy Area 5: Environment and Ecosystems
- Policy Area 6: Gender
- Policy Area 7: Culture
- Policy Area 8: Education and Research
- Policy Area 9: Communication and Information
- Policy Area 10: Economy and Labour
- Policy Area 11: Health and Social Well-Being
You might now realize why some are overwhelmed by the UNESCO listing. If you are a company that is crafting AI systems and you want to quickly establish an AI Ethics basis for your efforts, the odds are that you are not especially tasked with broad-strokes worldwide matters such as international cooperation, social well-being, and similarly globally sobering and weighty matters. That is a bridge too far for your otherwise relatively narrow and focused AI activities.
As I earlier observed, please do not let the density and scope of the UNESCO AI Ethics concordance deter you from wading into the contents. I assure you that many meaty morsels are utterly applicable to companies of all sizes and shapes. Do a little unpacking of the wording and you’ll find tremendously valuable nuggets about essential Ethical AI considerations.
I’d like to share with you in some detail one of the elements that I think you’ll find notably intriguing, namely Policy Area #1 on Ethical Impact Assessments.
I am a strong advocate of companies undertaking an AI Ethics Impact analysis of their AI-related activities. Sadly, not many firms are yet doing this. Once the proverbial mess hits the fan in terms of AI adoption, only then is there a sudden and usually panicked attempt at doing an AI Ethics Impact assessment. The problem with a belated attempt is that the horse is usually already out of the barn, and the brewing unethical AI dilemmas become harder to solve and often much more expensive to contend with, including severe reputational harm to the brand of the firm and a Pandora’s box of potential legal troubles.
Better to be safe than sorry is the catchphrase for doing your AI Ethics Impact analyses upfront.
Various methodologies can be used when performing an AI Ethics Impact assessment. The UNESCO document doesn’t lay out the specifics of such approaches but does at least emphasize the importance of doing the proper and timely groundwork. I’ve selected a few excerpts to illustrate the type of concerns and components that are listed regarding AI Ethics Impact evaluations (a sketch of how such an assessment might be structured follows the excerpts):
- “Member States should introduce frameworks for impact assessments, such as ethical impact assessment, to identify and assess benefits, concerns and risks of AI systems, as well as appropriate risk prevention, mitigation and monitoring measures, among other assurance mechanisms.”
- “Member States and private sector companies and civil society should investigate the sociological and psychological effects of AI-based recommendations on humans in their decision-making autonomy.”
- “Member States and business enterprises should implement appropriate measures to monitor all phases of an AI system life cycle, including the functioning of algorithms used for decision-making, the data, as well as AI actors involved in the process, especially in public services and where direct end-user interaction is needed, as part of ethical impact assessment.”
- “The assessment should also establish appropriate oversight mechanisms, including auditability, traceability and explainability, which enable the assessment of algorithms, data and design processes, as well as include an external review of AI systems.”
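To hint at how those excerpts might be operationalized inside a firm, here is a small illustrative sketch of an ethical impact assessment captured as a simple data structure in Python. The field names are my own assumptions, loosely echoing the UNESCO wording about benefits, risks, mitigation, monitoring, and oversight; this is not an official UNESCO template.

```python
# An illustrative ethical impact assessment record, loosely echoing the
# UNESCO excerpts above (benefits, risks, mitigation, monitoring, and
# oversight). Field names are assumptions, not an official UNESCO template.
from dataclasses import dataclass, field

@dataclass
class EthicalImpactAssessment:
    system_name: str
    lifecycle_phase: str                        # e.g., "design", "deployment", "operation"
    expected_benefits: list[str] = field(default_factory=list)
    identified_risks: list[str] = field(default_factory=list)
    mitigation_measures: list[str] = field(default_factory=list)
    monitoring_plan: str = ""                   # who watches the system, and how often
    oversight_mechanisms: list[str] = field(default_factory=list)  # auditability, traceability, explainability
    external_review_arranged: bool = False      # the excerpts call for external review

# Example: a hypothetical assessment for a ride-hailing dispatch AI.
assessment = EthicalImpactAssessment(
    system_name="ride-hailing dispatch AI",
    lifecycle_phase="deployment",
    expected_benefits=["wider mobility access for the community"],
    identified_risks=["geographic proxy discrimination in ride availability"],
    mitigation_measures=["cap the revenue weighting in the dispatch scoring"],
    monitoring_plan="weekly review of per-district ride fulfillment rates",
    oversight_mechanisms=["audit log of dispatch decisions", "explainable scoring"],
    external_review_arranged=True,
)
print(assessment.identified_risks)
```

The modest point is that the assessment then becomes a living artifact that can be versioned, reviewed, and audited throughout the AI life cycle, rather than a one-time memo.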
You can silently cross out the “Member States” phrasing and insert your company name if that will help to showcase how the pronouncements made about AI Ethics Impact assessments would be relevant to your particular firm or entity. I might add that the realm of AI Ethics applies to both for-profit and non-profit entities; it doesn’t matter whether you are a business run for profit or a social enterprise, since one way or another any devising or use of AI has to be undertaken in an Ethical AI manner.
No excuses, no exceptions.
At this juncture of this substantive discussion, I’d bet that you are desirous of some illustrative examples that might showcase the UNESCO AI Ethics guidelines.
There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.
Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about the UNESCO AI Ethics guidelines, and if so, what does this showcase?
Allow me a moment to unpack the question.
First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.
I’d like to further clarify what is meant when I refer to true self-driving cars.
Understanding The Levels Of Self-Driving Cars
As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.
These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
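For those that prefer the taxonomy spelled out programmatically, here is a tiny illustrative sketch encoding the commonly cited SAE J3016 levels of driving automation that the above distinctions rest upon; the class name and helper function are merely my own illustrative choices.

```python
# An illustrative encoding of the commonly cited SAE J3016 levels of
# driving automation. Names and comments are illustrative, not official.
from enum import IntEnum

class DrivingAutomationLevel(IntEnum):
    NO_AUTOMATION = 0       # a human does all of the driving
    DRIVER_ASSISTANCE = 1   # a single automated assist, e.g., adaptive cruise control
    PARTIAL = 2             # ADAS co-shares; the human must supervise at all times
    CONDITIONAL = 3         # the system drives in some conditions; the human must take over on request
    HIGH = 4                # true self-driving within a limited operational domain
    FULL = 5                # true self-driving anywhere a human could drive

def is_true_self_driving(level: DrivingAutomationLevel) -> bool:
    """Levels 4 and 5 are the 'true self-driving' cars discussed herein."""
    return level >= DrivingAutomationLevel.HIGH

print(is_true_self_driving(DrivingAutomationLevel.PARTIAL))  # False
print(is_true_self_driving(DrivingAutomationLevel.HIGH))     # True
```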
There is not yet a true self-driving car at Level 5, and we don’t yet even know if this will be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).
Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).
For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.
Self-Driving Cars And AI Ethics Guidelines
For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.
All occupants will be passengers.
The AI is doing the driving.
One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.
Why is this added emphasis about the AI not being sentient?
Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.
With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.
Let’s dive into the myriad of aspects that come to play on this topic.
First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.
Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.
I trust that provides a sufficient litany of caveats to underlie what I am about to relate.
We are primed now to do a deep dive into self-driving cars and the Ethical AI possibilities entailing the UNESCO AI Ethics guidelines.
In my extensive coverage of AI-based self-driving cars, I have covered many examples of how AI Ethics comes to the fore. The UNESCO AI Ethics guidelines are rather extensive and I am not going to try to go through them in detail herein with examples pertaining to each stated Ethical AI value or principle (the size of this column would be enormous). Please refer generally to my column coverage on AI Ethics and you’ll readily see many real-world examples that dovetail into the UNESCO AI Ethics precepts.
What we can briefly do here is take a look at the earlier stated AI Ethics Impact assessment approach as a vital tool for what entities should do when devising or using AI systems. If you’d like to see a full-blown example of such assessments, you might want to take a look at the analysis and template that I co-authored as part of a Harvard-led study on citywide policy issues entailing AI-based autonomous vehicles and self-driving cars, see the coverage at this link here.
Let’s dip our toes into such waters.
Envision that an AI-based self-driving car is underway on your neighborhood streets and seems to be driving safely. At first, you had devoted special attention to each time that you managed to catch a glimpse of the self-driving car. The autonomous vehicle stood out with its rack of electronic sensors that included video cameras, radar units, LIDAR devices, and the like. After many weeks of the self-driving car cruising around your community, you now barely notice it. As far as you are concerned, it is merely another car on the already busy public roadways.
Lest you think it is impossible or implausible to become familiar with seeing self-driving cars, I’ve written frequently about how the locales that are within the scope of self-driving car tryouts have gradually gotten used to seeing the spruced-up vehicles. Many of the locals eventually shifted from mouth-gaping rapt gawking to emitting an expansive yawn of boredom upon witnessing those meandering self-driving cars.
Probably the main reason right now that they might notice the autonomous vehicles is the irritation and exasperation factor. The by-the-book AI driving systems make sure the cars are obeying all speed limits and rules of the road. Hectic human drivers in their traditional human-driven cars get irked at times when stuck behind the strictly law-abiding AI-based self-driving cars.
That’s something we might all need to get accustomed to, rightly or wrongly.
Back to our tale.
Contemplate the seemingly inconsequential question of where self-driving cars will be roaming to pick up passengers. This seems like an abundantly innocuous topic. We will use the tale of the town or city that has self-driving cars to highlight the perhaps surprising potential specter of AI-related biases.
At first, assume that the AI was roaming the self-driving cars throughout the entire town. Anybody that wanted to request a ride in a self-driving car had essentially an equal chance of hailing one. Gradually, the AI began to primarily keep the self-driving cars roaming in just one section of town. This section was a greater money-maker and the AI system had been programmed to try to maximize revenues as part of the usage in the community.
Community members in the impoverished parts of the town were less likely to be able to get a ride from a self-driving car. This was because the self-driving cars were further away and roaming in the higher revenue part of the locale. When a request came in from a distant part of town, any request from a closer location that was likely in the “esteemed” part of town would get a higher priority. Eventually, the availability of getting a self-driving car in any place other than the richer part of town was nearly impossible, exasperatingly so for those that lived in those now resource-starved areas.
You could assert that the AI pretty much landed on a form of statistical and computational bias, akin to a form of proxy discrimination (also often referred to as indirect discrimination). The AI wasn’t programmed to avoid those poorer neighborhoods. Instead, it “learned” to do so via the use of Machine Learning (ML) and Deep Learning (DL), as the toy simulation below illustrates.
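To show how such a bias can emerge without anyone explicitly coding it, consider this toy simulation built entirely on assumptions of my own making: two districts with differing average fares, a greedy revenue-maximizing dispatch rule, and more ride requests than available cars. Notice that no district is ever named in the dispatch rule, and yet the fulfillment rates diverge sharply.

```python
# A toy simulation (under purely illustrative assumptions) of how a greedy,
# revenue-maximizing dispatcher can drift into geographic proxy
# discrimination without ever being told to avoid a neighborhood.
import random

random.seed(7)
AVG_FARE = {"uptown": 30.0, "downtown": 12.0}  # assumed average fare per ride

def simulate(steps=1000, cars_per_step=1, arrival_prob=0.7):
    pending = []                                   # queue of waiting ride requests
    requested = {d: 0 for d in AVG_FARE}
    served = {d: 0 for d in AVG_FARE}
    for _ in range(steps):
        for district in AVG_FARE:                  # new requests arrive each step
            if random.random() < arrival_prob:
                requested[district] += 1
                pending.append(district)
        # Greedy revenue-maximizing policy: no district is named in the rule,
        # yet it systematically favors the higher-fare district.
        pending.sort(key=lambda d: AVG_FARE[d], reverse=True)
        for _ in range(min(cars_per_step, len(pending))):
            served[pending.pop(0)] += 1
    for district in AVG_FARE:
        rate = served[district] / max(requested[district], 1)
        print(f"{district}: served {served[district]} of {requested[district]} requests ({rate:.0%})")

simulate()  # uptown gets served nearly always; downtown starves
```

Swap the greedy sort for a fairness-aware scoring rule, or pair the dispatcher with the kind of disparity monitoring sketched earlier, and the skew becomes visible and correctable before riders in the poorer part of town are effectively locked out.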
It was assumed that the AI would never fall into that kind of shameful quicksand. No specialized monitoring was set up to keep track of where the AI-based self-driving cars were going. Only after community members began to complain did the city leaders realize what was happening.
The horse got out of the barn.
As I mentioned earlier, it is vital to upfront work on AI Ethics Impact assessments. Why didn’t the city or town do their homework before they gave a green light for the use of self-driving cars on their public roadways? Why didn’t the automaker or self-driving tech firm do their homework before they paraded their self-driving cars on the public roadways of that town or city?
It’s a shame. A darned shame.
Conclusion
A memorable quote from the UNESCO document might be a fruitful way to conclude this discussion about AI Ethics: “AI technologies can deepen existing divides and inequalities in the world, within and between countries, and that justice, trust and fairness must be upheld so that no country and no one should be left behind, either by having fair access to AI technologies and enjoying their benefits or in the protection against their negative implications, while recognizing the different circumstances of different countries.”
Benjamin Franklin was known for his famous quip that by failing to prepare, you are preparing to fail. We need to be adopting AI Ethics principles. We need to be acting on the gist of AI Ethics principles. And we need to be doing our homework beforehand, preparing mindful and useful AI Ethics Impact assessments to make sure that we are prepared for what AI is going to do.
An ounce of prevention is worth a pound of cure, maybe especially so in the case of adopting AI.