In today’s column, I will examine the newest in a seemingly endless series of proclamations about AI as an existential risk to humankind. The latest instance made a big splash yesterday, Tuesday, May 30, 2023. Furthermore, surprisingly but with devout intent, the impactful statement this time was a mere twenty-two words in length (in contrast, most such public statements have run to narratives of several paragraphs or even several pages).
Turns out that this petiteness in length has ingeniously hastened largeness in stature, as I’ll be explaining shortly.
Global reactions to the missive, both within the AI field and by the public at large, have already been swift and boisterous. Some insist that we desperately need these dire pronouncements, or else we are all going to be utterly devastated by AI, silently and without ample preparation. Others retort that this is entirely premature and amounts to doomsday predictions or outright fearmongering. Between those extremes of counterbalancing views sits a rough mixture of partial belief and partial disbelief concerning the value of these alarm bell-ringing declarations.
As you know, nearly all topics these days seem to be met with polarizing viewpoints, no matter what the topic might be. Even the most innocuous of matters can give rise to dramatic and radical clamoring from opposite ends of a spectrum.
That being said, you would be hard-pressed to assert that discussing or making declarations about the wiping out of humanity by an AI takeover should somehow escape our societal tendency to polarize. You could try to argue that if there is any chance at all of AI enslaving or destroying humankind, then we should all be working together arm-in-arm to prevent this. The retort is that we are deluding ourselves into a false state of fear and panic, out of which all manner of adverse inadvertent consequences can arise.
I’ll be covering these perspectives as I unpack the pronouncement made.
For those of you wondering why these outspoken statements about AI are incessantly garnering massive headlines right now and spurring widespread social media chatter, the root of this recent flood can be traced to the emergence of generative AI, such as the wildly popular ChatGPT by AI maker OpenAI, along with other generative AI apps such as GPT-4 (OpenAI), Bard (Google), Claude (Anthropic), etc.
Generative AI is the latest and hottest form of AI and has caught our collective rapt attention for being seemingly fluent in undertaking online interactive dialogue and producing essays that appear to be composed by the human hand. In brief, generative AI makes use of complex mathematical and computational pattern-matching that can mimic human compositions by having been data-trained on text found on the Internet. For my detailed elaboration on how this works, see the link here.
The usual approach to using ChatGPT or any other similar generative AI is to engage in an interactive dialogue or conversation with the AI. Doing so is admittedly a bit amazing and at times startling, given the seemingly fluent nature of the AI-fostered discussions that can occur. The reaction by many people is that surely this might be an indication that today’s AI is reaching a point of sentience.
To make it abundantly clear, please know that today’s generative AI is not sentient, and indeed no other type of AI currently is either.
Whether today’s AI is an early indicator of a future sentient AI is a matter of highly controversial debate. The claimed “sparks” of sentience that some AI experts believe are being showcased have little if any ironclad proof to support such claims. It is conjecture based on speculation. Skeptics contend that we are seeing what we want to see, essentially anthropomorphizing non-sentient AI and deluding ourselves into thinking that we are a skip and a hop away from sentient AI. As a bit of up-to-date nomenclature, the notion of sentient AI is also nowadays referred to as attaining Artificial General Intelligence (AGI). For my in-depth coverage of these contentious matters about sentient AI and AGI, see the link here and the link here, just to name a few.
Into all of this comes a plethora of AI Ethics and AI Law considerations.
There are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned and earnest AI ethicists are trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws being bandied around as potential solutions to keep AI endeavors from going amok on human rights and the like. For my ongoing coverage of AI Ethics and AI Law, see the link here and the link here.
The development and promulgation of Ethical AI precepts are being pursued to hopefully prevent society from falling into a myriad of AI-induced traps. For my coverage of the UN AI Ethics principles as devised and supported by nearly 200 countries via the efforts of UNESCO, see the link here. In a similar vein, new AI laws are being explored to try and keep AI on an even keel. One of the latest efforts is the proposed AI Bill of Rights that the U.S. White House recently released to identify human rights in an age of AI, see the link here. It takes a village to keep AI and AI developers on a rightful path and deter the purposeful or accidental underhanded efforts that might undercut society.
With those foundational points in hand, we are ready to jump into the latest pronouncement and explore the splash that it made.
What The Statement Says And Why It Is Controversial
I hope you are ready to examine and assess the recent AI existential risk declaration.
Perhaps you might be wise to find a comfy chair and have a glass of spirits available, just in case needed.
The succinct statement that was released, and to which a number of AI researchers and AI scientists have added their signatures, is this (as posted on May 30, 2023, by the Center for AI Safety):
- “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
There it is.
Seems straightforward.
One interpretation is that we would be sensible and prudent to give attention to the possibility of AI going awry and causing humankind’s extinction, and that the level of attention should be on par with risks considered akin, such as the chance of encountering worldwide killer pandemics or a destructive, cataclysmic global nuclear war.
Easy-peasy as a proposition. Nothing seems oddish or outlandish in that interpretation. If someone told you that there was something that could gut humankind, shouldn’t we be putting sufficient consideration toward that disruptive deadly matter? Of course, we should, comes the prudent reply.
To make things easier to comprehend, the handy-dandy comparison to pandemics and nuclear war is essential to the statement and provides a hefty one-two punch since it immediately establishes how serious and disconcerting an AI takeover could be.
The abstract nature of an AI takeover is made much more personal and tangible via the claimed analogous settings. We have recently lived through a pandemic and already know the devastation that can be wrought. We all understand the dangers of nuclear weapons and realize the immense catastrophic results that would occur in any large-scale nuclear strike conflagration. AI is said to be yet another such endangerment.
Well, that is where, for some, the statement goes off the rails.
They ask with genuine pointedness whether those comparisons are outsized, unfair, and made with a tinge of intentional doomster spuriousness at play.
They ask these direct questions:
- Is AI really on par with the specter of global pandemics?
- Is AI really on par with the ominous destruction arising from a nuclear war?
- Does the comparison create a false connection to already known and worrying factors of modern-day life?
- Etc.
Those skeptics would say that the statement should not have included the stated comparisons. In essence, the statement should presumably be shortened to this:
- “Mitigating the risk of extinction from AI should be a global priority” (i.e., a shortened variant that lops off the part about pandemics and nuclear war).
A counterargument would be that such a shortened declaration would be loosey-goosey. The concern is that saying that the AI risk of extinction ought to be (merely) a global priority is insufficient and lacks punch. Everyone might agree that this is something of a global priority, but then the ranking would potentially leave the AI risk topic in the dustbin of global prioritization. You see, by purposefully tying the AI risk to the likes of pandemics and nuclear war, this ensures that the matter lands into the topmost stratosphere of worldwide interests and worries.
Think of it this way. We can’t let AI extinction risks fall into the everyday muck and mire of routine global troubles. This would shortchange the attention. It would be one of those paper-pushing endeavors of going back and forth with little resolve and inadequate desire to deal with it. When push came to shove, other topics such as pandemics and nuclear war would continually overtake the AI topic. Mitigating AI extinction risks would become hidden, unresolved, and rarely see the light of day.
I trust that you see the conundrum of whether it is considered judicious, reasonable, and altogether fair to put AI extinction risks on the same bandwagon as pandemics and nuclear wars.
Things get murkier.
Some would argue that placing AI extinction risk at the pinnacle of global issues will hamper all sorts of other vitally crucial global priorities. Are we to believe that AI extinction risk should be more important than worldwide hunger? What about worldwide poverty? On and on the list goes. Those other vital issues might be pushed downward by the addition of AI extinction risk at the top of the heap.
The difficulty is that we might give undue short shrift to those other compelling topics due to the now-heightened (some say hyped) and stridently proclaimed AI extinction risk. Clear your head, some bellow. We know for sure that worldwide hunger exists, and so does poverty, along with other readily demonstrable issues. The viewpoint is that by focusing on AI extinction risks, which are less demonstrable, you risk undercutting the in-hand and utterly valid issues that are right in front of us and that clearly exist today.
On a related tangent, you might find of interest my coverage of how AI can both help and potentially hinder the United Nations SDGs (sustainable development goals), see the link here. Likewise, I’ve discussed how AI pertains to nation-state geopolitical power positioning, see the link here and the link here.
A ping-pong match of tit-for-tat can be played when arguing about global priorities. For example, some would say that you are assuming a zero-sum gambit whereby the addition of AI extinction risk is going to undermine those other priorities. Not so, the reply comes. We can have our cake and eat it too, namely that the pie expands to encompass AI extinction risk, ergo nothing else is reduced or minimized in priority.
Another retort is that you have to be thinking about both short-term and long-term horizons. Even if the AI of today is not immediately an extinction risk, you would be foolhardy to put off the topic until some later time. The gist is that while you are ignoring the AI extinction risk and paying attention to other global priorities, AI is inevitably and inexorably going to arise and wipe you out. You had your eyes on the short term and failed to give due attention to the long term. Oopsie.
I’ll add another twist if you are willing to dive deeper into this mind-bending dilemma.
Diehard skeptics would say that the statement should not have included the stated comparisons and also should not be tied to extinction risk. The statement should presumably be further shortened to this:
- “Mitigating the risk from AI should be a global priority” (i.e., an even further shortened variant that lops off the part about pandemics and nuclear war, plus removes the extinction element).
That would be a version that likely most skeptics would also be willing to support (not all, mind you, but a lot more would).
As you might guess, the counterargument is that this drastically pruned variant is purely milquetoast. You might as well ask people if they are in favor of apple pie. It would seem that just about everyone could agree that we should mitigate the risks of AI. Period, full stop. In addition, doing so should be done on an international or global basis, else some countries won’t care and we’ll all still be in a sinking ship.
The battle of words rages on.
I had earlier mentioned that the succinctness of the statement was a notable characteristic.
Here’s why.
You can indubitably now see that even a rather short proclamation can give rise to numerous contentious considerations. The longer ones are rife with zillions of contentious points. Getting AI devotees to sign onto the longer missives is hard to do since there is bound to be something in there that will splinter off a variety of alternative perspectives. And without signers, the declaration will likely not get much media stickiness. The names are often what makes a pronouncement have newsworthy impact.
If you want to maximize the number of signers, the odds are that you’ll need to be short and sweet. Even so, you can bet that some big-name AI devotees will not find the statement suitable. Anyway, there is an art and science to devising AI-related pronouncements. One thing we can say for sure is that this won’t be the last. Many more are going to be heading to the public square. You can take that prediction to the bank.
Okay, go ahead and take a preparatory sip now from the spirits that you have hopefully had within your reach. Please do so before we get to the next set of considerations.
You’ll need it.
There Is Much More To Crow About
Better put on your seatbelt while in that comfy chair. I am going to rapidly cover a slew of jarring bumps in the road that pertain to these kinds of AI extinction statements.
Some cynically contend that these AI extinction statements are being made on a wink-wink basis. The assertion is that this is either hollow posturing or an insidiously crafty form of regulatory and societal trickery. You’ll have to decide which perspective you find most convincing: that this is tomfoolery, or that it is entirely sincere and heartfelt.
I’ll cover these in three main points, doing so with a sense of balance by aiming to cover both sides of the contentious issues.
(1) Virtue Signaling Or Bona Fide Concern
One shrill claim is that this is nothing more than virtue signaling. Let’s unpack that. I’ll try to cover both sides equally.
You can presumably be gallantly virtuous by wanting to save the world and protect humanity from the AI extinction curse. This surely is heroic and thoughtful on the part of those making their voices heard on the looming matter. Thank goodness that there are those in the know who are trying their darnedest to make sure humankind survives in the face of destructive AI.
The counterclaim is that this virtuousness is misplaced. It is the sky-is-falling type of virtuousness. These are Chicken Littles who are stoking confusion and fear across society. They are using their position of influence and misusing it, whether doing so blindly or by intent. The claims are abject clutter and overstated.
Mull that over and see where you opt to land.
(2) Regulatory Capture Or Protecting Humankind
Another vibrant claim is that this is a sneaky form of what is known as regulatory capture, or what is commonly in the tech world referred to as building a moat around your technology. Again, I’ll try to cover both sides of this.
Suppose you are an existing company that has spent a huge chunk of money on devising today’s AI. You naturally want to profit from that investment. Meanwhile, if other newcomers can build or devise similar AI, perhaps at a much lower cost due to other advances and awareness of how to do so, you are not going to come out looking pretty. Those competitors might blow you out of the water or at least diminish whatever financial stream you had hoped to earn as a needed payback on your AI investment.
How can you disrupt your upcoming competition?
One answer would be to try and get regulators and lawmakers to establish new laws that might be shaped to your benefit. The odds of that happening would seem low if you explicitly tried to tie this to your circumstance. Thus, instead, you use the AI extinction risk as a cover, claiming that the new laws are required to cope with it.
The aim would be to help gently influence new laws about AI that could bolster your current standing and serve to heighten barriers to entry for new entrants that want to compete. For example, if the new AI laws strictly stipulated that all AI apps have to henceforth be scrutinized and approved by the government before being released to the public, this would add a lot of cost to any effort to devise and release new AI. Your existing efforts might be grandfathered in, or perhaps your firm is large enough in existing resources to readily deal with the added requirements.
Small firms would have little chance to compete. Large firms would have to decide whether the added cost is worth whatever they might gain by freshly entering into the AI arena. New firms would have to raise enormous capital to get off the ground and bring their AI to the marketplace. And so on.
This is known as regulatory capture, whereby an attempt is undertaken to influence lawmakers or regulators to enact laws that are favorable to a particular marketplace segment. In the tech world, this is also considered a means of virtually constructing a competitive moat around your castle, as it were. We all know that a moat serves as a barrier to those wishing to storm the castle. New AI laws could potentially serve that same purpose for existing AI companies (or even for companies that want to move into AI and seek a fast track to do so; thus, this works both ways).
Let’s consider the counterargument on this.
We expect and assume that our regulators and lawmakers will do what is best for their constituents. Any attempt by an AI company to intentionally shape new AI laws to its advantage will certainly be discerned by the policymakers and be rebuffed. Imagine the backlash toward any AI company that got caught trying to rig the new AI laws.
Furthermore, it seems highly likely that we do genuinely need new AI laws to contend with the lengthy list of AI concerns, such as AI that contains undue biases, has insufficient transparency, lacks suitable explainability, etc. Those are legitimate needs, regardless of whether a particular AI company is touting that those laws are needed.
As an aside, not everyone agrees that we need new AI laws; some contend instead that existing laws already on the books are well-suited to the AI era. They would also suggest that if we start adding new laws about AI, we are going to get bogged down in years upon years of court wrangling over the meaning and realization of those laws. Existing laws have pretty much worked out most of those kinks. New laws, whether AI-oriented or not, will need to stand the test of time and be challenged throughout our courts.
For more of my discussion on these weighty tradeoffs, see the link here and the link here.
The crux is that some would indicate that it doesn’t matter whether this AI firm or that AI firm wants new AI laws. In the end, the AI laws, if properly put together and enforced, would serve to protect us from the AI extinction risk or at least help in mitigating that risk. The aim would be to make balanced laws that presumably do not favor or disfavor one firm over another. Instead, the laws should be protective of society all told and create a playing field of sufficient safety and balance.
Ponder that for a few moments and decide which side you tend toward.
(3) Imaginary AI Or Tangible AI
An outspoken concern about focusing on AI extinction risk is that we are perhaps forgoing attention to day-to-day contemporary AI risks. I’ll seek to cover both sides of this.
Whenever a reference to AI extinction risks arises, most people probably think about AI overlords. Those would be the sentient AI that comes alive, akin to what you see in sci-fi movies and TV shows. The AI miraculously attains sentience and decides that humans aren’t worthy. Since the AI is presumably superintelligent and all-powerful, it can crush us like bugs.
Goodbye, humanity.
One viewpoint is that we have enough AI issues today to deal with and that we don’t need to overfocus on some speculative AI demon future. We should keep our heads in the game of coping with the AI we already have, which exhibits undue biases, lacks transparency, and suffers the other assorted AI Ethics issues that I’ve discussed at length in my columns. We should be putting our scarce resources toward dealing with current AI rather than frolicking off after some outsized imaginary AI takeover that might never occur or might happen eons from now.
The worry is that any new AI laws or other provisions will be aimed at the imaginary future AI. This in turn might allow existing non-sentient AI to continue unabated. We can get ourselves into quite a mess with regular AI; it does not necessarily take sentient AI to wipe us all out. For my analysis of how conventional AI that controls autonomous weapons systems could be a daunting risk to us all, see the link here.
There are several counterarguments to the qualm of overfocusing on an imaginary AI.
One viewpoint is that we can do both, namely we can aim to devise AI Ethics and AI Laws that cover the current issues of AI and are also constructed to deal with futuristic sentient AI. We don’t need to split the baby, as some might say (a phrase perhaps worth retiring). Both goals could possibly be pursued simultaneously. The counter to that point is that we might fail to deal with both and that the limelight will inevitably go to the futuristic AI instead. And so on.
Another viewpoint is that we can muddle along with the existing AI issues. We don’t need any special effort to do so. On the other hand, AI as an extinction risk requires extraordinary attention. Since our very existence depends on coping with the AI extinction risk, naturally and sensibly the bulk of our attention to dealing with AI should go in that direction. The counter to that point is that, meanwhile, we end up devastating ourselves with regular AI. And so on.
You are probably shifting around uncomfortably in your comfy chair, trying to figure out which of these myriad points and counterpoints is best to adhere to. Welcome to the evolving and heated realm of AI and the role of AI Ethics and AI Law. There is no end to the excitement.
Conclusion
I’d bet that some of you might be tossing your hands in the air and decrying that we should just ban AI and be done with all this handwringing. The idea seems simple. If we don’t allow AI, we don’t have to worry one iota about what AI is going to do or become.
Problem solved.
Sorry to say that this does not solve the problem. As I’ve discussed in my analysis of proposals to ban AI, you aren’t in any practical sense going to be able to do so, see the link here. One way or another, people will be devising AI. This can lead to a world of those that have AI and those that don’t. Plus, the AI that they surreptitiously devise will presumably be completely unfettered and unchecked, since they are ignoring or flouting the ban and will seemingly concoct whatever they wish to contrive.
AI is here to stay.
AI is going to be further advanced.
Those are rules of thumb to live by.
We undoubtedly need to be carrying on a societal dialogue about where AI is heading. Trying to bury our heads in the sand doesn’t seem like a viable approach. In that case, you could persuasively say that having proclamations about the future of AI is duly warranted.
The counterargument is that if the statements are repetitive and cover the same ground, or are over-the-top in terms of sentiment, this can lead to a kind of AI overlord alarmist fatigue. Perhaps regulators and lawmakers will tire of the repeated clamoring. Maybe the public will see this as a hackneyed trope. If that occurs, the worry is that all attention to everyday AI concerns, such as the regular AI qualms that I’ve mentioned, will be tossed into the same category as a crying wolf mantra.
Wait a second, a heated retort arises, is this an implication that we should remain silent? But we cannot remain silent when so much is on the line. Silence would be disastrous.
We might find the famous quote by Pythagoras, the renowned philosopher and acclaimed mathematician, to be instructive on these modern-day AI issues: “It is better either to be silent, or to say things of more value than silence. Sooner throw a pearl at hazard than an idle or useless word; do not say a little in many words, but a great deal in a few. Be silent or say something better than silence.”
You’ll have to judge each new AI-risks proclamation in light of those ancient but thoroughly insightful words.