Was it an amazing opportunity to get a glimpse of the future or was it an outrageous publicity stunt that made a mockery of lawmaking and lawmakers?
You decide.
I am referring to the headline-making spectacle, and some say discomforting brouhaha, last week of the UK Parliament having an AI robot provide “testimony” during a meeting of the Communications and Digital Committee in the Upper Chamber of the House of Lords. For roughly an hour, the Committee “interacted” with an AI robot, though the truth of the matter is that this wasn’t much of an interaction and was more like a prearranged, scripted affair.
As per our modern time’s polarization, some pundits fully embraced the actions of the Committee and felt that this inclusion of an AI robot was a historic moment that shall forever stand tall in the record of humanity. We are finally acknowledging the advances in AI and how robots, especially those in humanoid form, will seemingly soon be an integral part of society. Meanwhile, caustic comments came from pundits on the other side of the coin, emphasizing that the AI was essentially a rigged contrivance and that the whole stunt was not only ridiculous but sadly and badly made a mockery of an esteemed legislative body.
All in all, the matter can be at least viewed as an intriguing and notable lesson associated with AI Law and AI Ethics. We ought to see what the circumstance showcases and identify potential insights associated with where AI is today, where it is heading, and how we need to be mindful of both ethical ramifications and legal ones. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.
Let’s unpack what happened in the case of an AI robot that was heard and seen by British legislators during an official session and is forever cast upon the formal record (the recorded session is available online, and if you do opt to watch it, please make sure to see the segment that precedes the flamboyant AI robot flaunting, since there are serious discussions entailing expert witnesses covering AI societal, legal, and geo-political considerations).
The Core Of The Matter
A kind-of robot that is partially humanoid in appearance was propped up and brought to “testify” in a session that ostensibly was about the impact of technology on creative industries. Named Ai-Da, you might already be vaguely familiar with this robot due to its having been touted in the news for several years now, especially known for being used to produce works of art such as paintings and poetry.
The backstory is that the robot was devised in 2019 and is considered the invention of Aidan Meller, though some dispute labeling him the inventor since the device was put together by a company called Engineered Arts in conjunction with researchers from various universities such as Oxford, Birmingham, and Leeds. By his own admission, Aidan is more of an artist and art dealer than a technologist.
Note that the robot’s name, Ai-Da, is notably similar to the first part of Aidan’s name, though the claim is that the robot is named after Ada Lovelace and is meant to honor her important contributions to the early days of computing. This brings up, indirectly, a controversy associated with Ai-Da.
You see, Aidan Meller steadfastly refers to the AI robot as she.
During any discussions about the robot, the use of “her” or “she” is repeatedly indicated. At first blush, you might consider this a simple and easy way to make reference to the machinery. If they want to refer to the robot as a female, so be it.
Not so, some heatedly contend.
First, by assigning a gender label, you are subtly but significantly anthropomorphizing the robot. The robot is genderless, yet by giving it a gender you are crossing over into a semblance of humanness. People tend to assume that a “she” or “he” is associated with humankind. A sneaky ploy for getting people to accept and falsely ascribe human qualities to an AI robot consists of referring to the system in a gender-focused manner.
Secondly, making matters perhaps even more egregious, the head and face of the robot have been contoured to appear like that of a human female. This adds to the anthropomorphizing propensity and causes most people to immediately and intrinsically react as though the robot is a person.
Third, some further question whether the selection of the female gender is a proper choice if indeed gender assigning to robots is to be tolerated. You could argue that this is going to blatantly objectify women. A counter-argument is that this is a sort of compliment to women and cleverly so in this case by trying to attach the naming to that of Ada Lovelace.
The gist is that there are a lot of controversies surrounding this particular robot, and that also raises crucial questions about the ins and outs of how we are going to proceed with confounding human qualities with those of AI robots.
Some are also being misled into believing that such robots are sentient or nearly so.
I want to make it absolutely clear-cut that this AI robot, and indeed all of AI, is not at all yet sentient. This is worthwhile to mention because staging AI humanoid-looking robots to appear to be sentient is an underhanded form of trickery and confuses the public at large. The same can be said for AI that is entirely online and not contained within a robotic form, such as the AI chatbots that you might end up interacting with. We keep seeing and hearing outrageous claims that AI is sentient or on the cusp of being so.
This is abundantly false.
Let’s make sure then that we are on the same page about the nature of today’s AI.
There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).
The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).
I’d strongly suggest that we keep things down to earth and consider today’s computational non-sentient AI.
Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor does any have the cognitive wonderment of robust human thinking.
Be very careful of anthropomorphizing today’s AI.
ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, if so found, the AI system then will use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.
I think you can guess where this is heading. If the humans that have been making the patterned-upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects in AI-crafted modeling per se.
Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now-hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing there will be biases still embedded within the pattern-matching models of the ML/DL.
You could somewhat use the famous or infamous adage of garbage-in, garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.
Not good.
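To make this computational pattern-matching point concrete, here is a minimal sketch in Python of the kind of pipeline just described. Everything in it is a hypothetical illustration (the loan-approval data, the feature names, and the model choice are my own, not anything tied to Ai-Da or any real system), and it assumes the widely used scikit-learn library:

```python
# Minimal sketch of ML as computational pattern matching (hypothetical data).
# Historical decisions serve as the training data; if those decisions were
# biased, the fitted model mathematically mimics that bias when deciding anew.
from sklearn.linear_model import LogisticRegression

# Hypothetical loan records: [income_in_thousands, neighborhood_group],
# where neighborhood_group inadvertently proxies for a protected attribute.
X_train = [
    [45, 0], [80, 0], [30, 0], [95, 0],   # group 0: historically approved
    [45, 1], [80, 1], [30, 1], [95, 1],   # group 1: historically denied
]
y_train = [1, 1, 1, 1, 0, 1, 0, 0]        # 1 = approved, 0 = denied

model = LogisticRegression().fit(X_train, y_train)

# Two otherwise identical applicants differing only by neighborhood_group:
print(model.predict([[60, 0], [60, 1]]))  # the "old" patterns render the call
```

Nothing in this sketch understands lending, fairness, or people. It merely finds a mathematical pattern in the historical data and reapplies it to new data, which is precisely why buried biases ride along so readily.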
All of this has notably significant AI Ethics implications and offers a handy window into lessons learned (even before all the lessons happen) when it comes to trying to legislate AI.
Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have laws to govern various uses of AI. New laws are being bandied around at the federal, state, and local levels that concern the range and nature of how AI should be devised. The effort to draft and enact such laws is a gradual one. AI Ethics serves as a considered stopgap, at the very least, and will almost certainly to some degree be directly incorporated into those new laws.
Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient. They forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages.
In prior columns, I’ve covered the various national and international efforts to craft and enact laws regulating AI, see the link here, for example. I have also covered the various AI Ethics principles and guidelines that various nations have identified and adopted, including for example the United Nations effort such as the UNESCO set of AI Ethics that nearly 200 countries adopted, see the link here.
Here’s a helpful keystone list of Ethical AI criteria or characteristics regarding AI systems that I’ve previously closely explored:
- Transparency
- Justice & Fairness
- Non-Maleficence
- Responsibility
- Privacy
- Beneficence
- Freedom & Autonomy
- Trust
- Sustainability
- Dignity
- Solidarity
Those AI Ethics principles are earnestly supposed to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems.
All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As emphasized earlier herein, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.
Now that I’ve laid a helpful foundation on these topics, we are ready to jump further into the use of Ai-Da in the UK legislative committee hearing.
The Robot Talks And The Lawmakers Listen
As mentioned earlier, this particular committee was aiming to examine the impact of technology on creative industries. You can certainly then see why it might have been clever to have a robot that does artistry such as paintings come to the fore. Lots of vital questions are occurring right now as to whether being able to create art is a supremely human talent or whether AI can do likewise.
See my coverage at the link here of that recent art contest that awarded AI as a winner. The incident raised hackles and opened further the can of worms as to whether artistic aptitude requires a soul and ought to be reserved for humanity.
Besides the philosophical dilemma of AI producing art, there are also the day-to-day practical matters that arise too:
- Will human artists go out of business and be replaced by AI artists?
- If AI artists are copying or utilizing human artistry as a basis, aren’t human artists being ripped off?
- Will human artists be construed as lesser in artistry than “masterful” AI artists?
- Should we keep separate that there are human artists and AI artists, avoiding a comparison?
- And so on
The UK Committee could rightfully insist that discussing AI and robotics was highly germane to their legislative assignment. That being said, it is one thing to discuss the topic and altogether a different animal to herald a robot and grant it the solemn duty of seemingly testifying.
To try and get around the potential outcry of giving an AI system such revered prominence, this was stated during the Committee hearing: “The robot is providing evidence, but it is not a witness in its own right. And I don’t want to offend the robot, but it does not occupy the same status as a human and that you as its creator, are ultimately responsible for the statements” (see the online video for this excerpt).
You might suggest that the statement helpfully clarifies the matter.
Others complained that it was too little, too late, in the sense that after the grand hullabaloo about the robot and upon giving it rarefied airtime during the session, such a disclaimer was weak and seemed to be a contrived wink-wink. It was also considered unsettling that the remark included a sort of apology to the robot, as though once again anthropomorphizing the AI, though this was followed by an indication that the cheeky phrasing was principally in jest.
The lawmakers proceeded to ask questions of the Ai-Da robot. This is yet another example of a wink-wink going on.
The questions were provided to Aidan Meller beforehand. He indicated that the questions were apparently fed into an AI LLM (Large Language Model), which you might know are the latest Natural Language Processing (NLP) systems that do language mimicry. See my analysis of where LLMs are headed, which I indicate at the link here.
According to Meller, the responses produced by the AI LLM were then cleaned up by its human handlers, though we don’t know how much clean-up was undertaken (wholescale or teensy tiny). Lawmakers each read aloud their respective, shall we say, scripted questions, and the AI robot then verbalized the “derived” answers.
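Based on Meller’s description, the workflow presumably resembled the following sketch. To be clear, this is a hypothetical reconstruction: the function names, the review step, and the sample questions are stand-ins of my own devising, since the actual system behind Ai-Da has not been disclosed:

```python
# Hypothetical reconstruction of the described workflow (not Ai-Da's code).

def generate_llm_answer(question: str) -> str:
    """Stand-in for a call to some LLM; the actual model and API are unknown."""
    return f"[draft LLM response to: {question!r}]"

def human_cleanup(draft: str) -> str:
    """Stand-in for the human handlers' editing pass (extent unknown:
    wholescale rewrite or teensy-tiny touch-up, we cannot tell)."""
    return draft.strip()

# Per the session account, the questions were provided to the team beforehand.
prepared_questions = [
    "How do you produce your artwork?",
    "Is AI a threat to the creative industries?",
]

# Draft offline, clean up, and only then have the robot verbalize the script.
script = [human_cleanup(generate_llm_answer(q)) for q in prepared_questions]
for answer in script:
    print(answer)  # in the hearing, the robot's text-to-speech played this role
```

Note that nothing in such a flow is interactive in any conversational sense: the answers are assembled and vetted in advance, and the robot merely plays them back when its cue arrives.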
Impressive?
Hokey?
Probably the most apt reaction is that it was staged, a misleading portrayal paraded as though it was real, and that the disclaimers being proffered were done to try and save face. By making the disclaimers, the organizers could get away with almost anything, since the high ground allowed them to always contend that they had been aboveboard. Of course, the media pretty much tended to grab the snippets that showcased the robot responding and omitted the disclaimers. Not our fault, you can imagine any lawmaker exhorting, we had the disclaimers for all to hear.
Wink-wink.
Let’s assess some so-called statements made by the AI robot.
Try this on for size: “I am, and depend on, computer programs and algorithms. Although not alive, I can still create art” (see the online video for this excerpt).
Notice once again the sneaky way that this remark has been composed. The message says that the AI is not alive. The message says that AI is based on computer programs and algorithms. This certainly seems like a fair and balanced way to convey what is going on.
But is it really that fair and balanced?
The use of the word “I” permeates the remarks being verbalized by the AI robot. When you hear the word “I” you are almost guaranteed to ascribe a semblance of humanness. How can you not? The power of the word “I” is that it brings forth the fullness of being a human. I am a human being and I am representing to you something of importance. Likewise, you assume the same when an AI system is rigged to use the word “I” (this is commonplace and sneakily defended by suggesting that it makes the AI seem friendly and more readily dealt with, see my analysis at this link here).
A somewhat ingenious means of scooting around complaints about anthropomorphizing is by dovetailing denials into the very aspect that is in fact doing the anthropomorphizing. Short and sweet, “I am not alive” provides a beauteous example. You’ve got the high ground of making sure that the AI has said it is not alive, and underneath you’ve got the “I” that tugs at heartstrings and makes people believe the AI is alive.
It is a work-of-art twofer to behold.
At one point during the session, a remark was made that the AI robot exemplified a level of sophistication that was beyond expectations. I suppose that readily illustrates how well-orchestrated and staged things were. Some viewed the remark of apparent sophistication as an indicator of how some might be readily lulled into believing what they see and hear. Others defended the remark by pointing out that it all depends on how low your expectations were to begin with.
Murphy’s law of high-tech demonstrations did end up clouding things, somewhat.
The AI robot became unresponsive, appearing to be hung or stuck, right in the middle of the proceedings. It might have been equivalent to having your laptop or desktop computer go into sleep mode. There had already been an extended period of time during which the AI robot was not actively being used and Meller was instead providing testimony. The passage of time could have led to the AI system going into sleep or hibernation mode.
Meller then explained to the attendees that he would need to essentially do a reboot of the AI. He also put sunglasses over the camera eyes of the robot.
The sunglasses deserve a side tangent herein.
You might be puzzled as to why sunglasses would need to be put over the camera eyes of the robot. We know that humans use sunglasses to guard their eyes against the sun. Well, sure enough, one of the lawmakers asked whether the sunglasses were needed to aid the AI robot in avoiding being overcome by visual stimuli upon being reawakened, as it were.
I dare say this vividly reinforces the anthropomorphizing going on.
The hasty reply by Meller was that the AI robot tended to make odd faces when doing a reboot.
As an educated guess, possibly what happens is that the camera eyes are not especially well calibrated and the motorized controls are not fine-tuned to deal with reboots, such that when a reboot occurs the mechanism might loosey-goosey roll around a bit. A human looking at the camera eyes might find this frightening or disturbing, much as a human might have their eyes rolling around when they faint or have some ailment that afflicts them.
In any case, the camera eyes going into a semi-oddish pattern would (you could stridently argue) actually help to showcase that the robot is a robot and not a human. To keep the masquerade going, you would shrewdly put sunglasses over the camera eyes (or perhaps simply engineer better camera eyes). The convenient excuse is that you don’t want to panic anyone that is watching the robot.
Anyway, back to the mainstay of things.
Reporting in the media did bring up the fact that the AI robot stalled during the session. And though the need to do the reboot was likely or assuredly unplanned, you could say that it might have bolstered the publicity. Yes, it made the AI robot seem more fallible and provided a bit of levity, and allowed headlines to offer relief that robots aren’t yet perfected.
No harm, no foul.
One can only wish that when doing high-tech demos of new AI systems, having your AI go blank and needing to reboot could be considered a positive or favorable factor. Much of the time, you would get laughed off the stage and be ever cursing your bad luck that the AI stalled right in the midst of your most important moment.
Debates About Creativity
Buried within the theatrics were some significant points about creativity.
I’ll briefly whet your appetite.
Consider this statement made by Meller (the human): “Creativity is not restricted to subjective internal conscious brain processes. Creativity can very much be done very well by AI in lots of remarkable ways. It can be studied and mimicked, and that is a game changer” (see the online video).
Can creativity be studied and mimicked?
Some hold dearly the belief that only humans are creative. Machines cannot be creative. Furthermore, creativity is intangible and you cannot copy it or try to recreate it. Others vehemently disagree. They point to the zillions of courses and classes about how to be creative. Clearly, humans can learn to be creative, they bellow. Whether AI can also be creative is perhaps a different question, but if creativity can be taught, this seems to open the door to “teaching” AI to be creative.
Unfortunately, these useful and much-needed debates about such matters were clouded and tarnished by other remarks, such as this one about having the AI robot there to present, namely that it was claimed to be useful to “have the technology be able to speak for itself” (a remark made during the session).
If you interpret the remark as suggesting that the AI was sentient and able to represent itself as to what creativity consists of, well, that’s a bridge too far. For those that wish to be generous, you could interpret the remark as meaning that perhaps having the technology before them was able to illuminate what the capabilities of the technology currently consist of. That’s generosity, for sure.
In one breath, the claim was made that the AI has self-awareness (for my picking apart of AI self-awareness contentions, see the link here). Later on, it was stated that the AI doesn’t understand what was being indicated and that people are apt to project themselves onto the capacities of the AI.
Things seesawed like this throughout.
You would have every right to be perplexed by the session. It was like watching a ping-pong match. One moment the AI robot was portrayed as miraculous, creative, and beyond imagination. A moment later the AI robot was said to be unable to comprehend and was merely a machine. To clarify, these swings at times came from Meller himself, covering both sides of the tabletop match.
Perhaps one of his most memorable remarks was this: “On some level, she is a deception, trying to mirror confusion of the world” (see the online video).
Conclusion
Anyone that has closely observed the AI robotics field will likely remember that there was a somewhat similar hubbub when the robot known as Pepper essentially testified in the UK House of Commons for a committee hearing in 2018. Ai-Da now has the “history in the making” distinction of being the first to do so in the upper house, the House of Lords.
What are we to make of these AI robots that are allowed into such hallowed chambers to serve as witnesses or give testimony?
You might try to emphasize that AI is getting more advanced and we could find ourselves confronting the thorny issue of whether AI deserves legal personhood, see my discussion at the link here. In that case, perhaps we should get used to the idea of AI robots standing or sitting in front of lawmakers to make their legal merits known.
Preposterous, some exclaim.
Today’s AI and AI robots are not anywhere close to being sentient or anything akin to sentience. We are losing our heads by ascribing humanness to AI. Letting AI make appearances before lawmakers can be indubitably misleading. It is anthropomorphizing gone to extremes.
Another viewpoint is that if the only means of getting society to pay attention to where AI is headed, albeit non-sentient AI, is to put on some AI-flashy dog-and-pony shows, then doing so is a worthy cost. We need everyone to be thinking about AI.
This includes, and especially so, lawmakers.
We of course want our lawmakers to know the difference between what is AI real and what is AI fakery. Laws that might get devised based on an incomplete or misunderstood semblance of what AI constitutes would undoubtedly be insufferable. Hopefully, lawmakers will strive toward the truth about AI and be well-advised to straighten out any AI misconceptions.
Laws entailing AI will need to be respectable.
The last word for now on this heady topic goes to Louis D. Brandeis, renowned former Associate Justice of the U.S. Supreme Court, who memorably said this: “If we desire respect for the law, we must first make the law respectable.”
Enough said.