Are We Going Too Far By Allowing Generative AI To Control Robots, Worriedly Asks AI Ethics And AI Law

Sticks and stones may break my bones, but words shall never hurt me.

That’s a classic adage.

Let’s see how this handy-dandy rule applies to generative AI.

When you make use of generative AI such as the widely and wildly popular ChatGPT by AI maker OpenAI or any other such AI app including Bard (Google), Claude (Anthropic), etc., the AI produces text essays and can interact with you via text or words. Those words are merely words. By this, I mean that there aren’t any direct consequential actions and there isn’t anything especially physically active about the essays and the interaction. For my extensive coverage of how generative AI works, see the link here and the link here, just to name a few.

There aren’t usually any sticks or stones involved.

Nothing particularly physical happens in the real world as a result of the generative AI spewing out words. That being said, the person reading the words might end up doing something of a physically tangible effort as a result of consuming the words. If the generative AI tells you to go pour a bucket of water on your head, presumably nothing would happen unless you consequently opt to find a bucket and pour water on your head.

I trust that you get my point which is that the entanglement with generative AI is conventionally all about words and the abstract world of words. Until a person acts upon those words, it is the stuff of clouds and puffiness. You can ignore what the generative AI has to say. You are the one that will put into action the words or recommendations made by the generative AI.

Well, until now.

In today’s column, I will take a close look at the use of generative AI in order to program and control robots. Robots are indeed in the tangible real world. They can in fact perform or execute physical actions. Thus, if you hook up generative AI to robots, you now have a direct link that extends from the abstract nature of words to the undertaking of actions that can physically have both modest and potentially majorly adverse real-world consequences.

It is for this reason that some have severe heartburn upon realizing that all kinds of people are proceeding to use generative AI in combination with robots. Whereas the generative AI on its own will just spin out words, the added connection to a robot or a multitude of robots means that the AI can become an active participant in the physical world. We can have all manner of robots and robotics systems that engage with humans and for which those robots are being controlled by generative AI.

Yikes, some roar worriedly, we are combining two potentially lethal foundational elements. You’ve got robots that might have mechanical arms and legs and other physical capacities. You’ve got the generative AI that takes on the role of controlling that physical manifestation. Are we dooming ourselves to ultimately being endangered by generative AI that currently is only able to toss around mere words?

The connecting of generative AI with robots would seem to take us into the scary and endangering realm of those sticks and stones. Just as a reminder, sticks and stones can break bones.

You might be tempted to disregard these concerns and argue that the people seeking to mate generative AI and robots will do so with great aplomb and caution. Nobody would want to have robots that go awry due to generative AI that has also gone awry. Heaven forbid that anyone would dare think of doing anything underhanded or adverse.

Sorry to say that wanting something and making something that in an ironclad fashion meets those wants are two vastly different things. Those that use generative AI to control robots might not realize the possible outcomes of their efforts. They might be blissfully unaware of the consequences. They might be aware of the consequences but consider the issue inconsequential. They might be seriously concerned and believe they have taken suitable precautions, but then totally miss the mark. They might make errors when using generative AI. The generative AI itself might generate errors.

On and on this spins.

You also have to take into account that the real world regrettably encompasses surreptitious wrongdoers and outright evildoers. Let’s be upbeat and assume that most people connecting generative AI with robots are on the up and up. Meanwhile, realistically, there are those that are on the down low. They want to intentionally devise robots that will be evildoers. They seek to use generative AI in their plots because generative AI makes it immensely easy to program and control robots.

You see, no longer do you need to have in-depth nerdish expertise in programming robots. You just log into a generative AI app, tell it in English or natural language whatever you want a robot to do, and the generative AI makes it happen for you. Voila, you are suddenly an esteemed and seemingly capable robotic system programmer. No real skills though are required on your part. The generative AI does all the heavy lifting.

In this discussion, I will take you through some of the latest research on connecting generative AI with robots.

We ought to take a balanced viewpoint on this. You can compellingly insist that robots are going to make our lives easier and we should welcome the upcoming era of robots in our homes, in our workplace, and just about everywhere we go. They are labor-saving devices. They can be immensely helpful to humans.

The exasperating deployment problem for many is that the desired advent of robots requires gobs of hours of programming time. Expensive engineers and programmers have to step-by-step program the robots. A bottleneck of having robots galore is that we might not have enough human programmers around to do all the needed programming.

A potential solution consists of using generative AI to do the programming. This dramatically lessens the need for high-priced and scarce robotics engineering experts. Some would even declare that the use of generative AI will democratize robotics. Instead of robots being programmed by a select few, the masses will be able to do robotic programming.

Have it your way, robots programmed by each and every one of us.

Seems quite nifty and altogether uplifting.

Maybe.

Those that are rushing ahead to do this are hopefully mindful of the fact that they are transforming an abstraction of words into a real-world acting player. Not everyone is necessarily painstakingly considerate about the implications of their efforts. There are robotics developers that simply relish the idea of more easily programming and controlling robots via the use of easy-peasy generative AI. And there are generative AI gurus and aficionados that simply enjoy seeing the wordy AI being able to enact actions via the capabilities permitted by robots.

Sadly, the AI Ethics propositions involved are either out of mind or set aside as unimportant. From an AI Law perspective, some suggest that the laws today are out-of-date and do not directly address a world in which just about anyone will be able to program robots. Perhaps we need new AI-related regulations and we need lawmakers to take seriously the emerging rise of robots as programmed via the use of generative AI. For my analysis of emerging legal changes and challenges associated with AI of all kinds, including generative AI, see the link here and the link here.

Let’s add an additional twist to this dilemma of words-alone versus physicality.

There is this lesser-known variant of the sticks and stones rhyme that incorporates the other side of the coin when it comes to the lethality of words versus physical actions: “The blow of a whip raises a welt, but a blow of the tongue crushes the bones” (from the Book of Sirach). You can make the case that words do matter and that people can be harmed by the use of words. Some might say that there are circumstances of physical harm that can be less damaging to the soul than the impact of words.

Anyway, the emphasis in this discussion is that the words generated by generative AI can for sure be turned into physical actions by connecting generative AI to robots. We ought to be doing so with solemn reservations and with the utmost safety and security in mind.

Not everyone will embrace that mantra, unfortunately, so we need to be on our guard accordingly.

Before I dive into the details, it might be helpful to establish what generative AI consists of. I will do so. Then, we can explore how it is that generative AI and robots are getting summarily paired up.

Keystones About Generative AI

Generative AI is the latest and hottest form of AI and has caught our collective rapt attention for being seemingly fluent in undertaking online interactive dialoguing and producing essays that appear to be composed by the human hand. In brief, generative AI makes use of complex mathematical and computational pattern-matching that can mimic human compositions by having been data-trained on the text and other content found on the Internet. For my detailed elaboration on how this works see the link here.

The usual approach to using ChatGPT or any other similar generative AI such as Bard (Google), Claude (Anthropic), etc. is to engage in an interactive dialogue or conversation with the AI. Doing so is admittedly a bit amazing, and the seemingly fluent nature of those AI-fostered discussions can at times be startling. The reaction by many people is that surely this might be an indication that today’s AI is reaching a point of sentience.

On a vital sidebar, please know that today’s generative AI is not sentient, nor is any other type of AI currently sentient. I mention this because there is a slew of blaring headlines that proclaim AI as being sentient or at least on the verge of being so. This is just not true. The generative AI of today, which admittedly seems startlingly capable of generating essays and interactive dialogues as though by the hand of a human, is entirely using computational and mathematical means. No sentience lurks within.

There are numerous overall concerns about generative AI.

For example, you might be aware that generative AI can produce outputs that contain errors, have biases, contain falsehoods, incur glitches, and concoct seemingly believable yet utterly fictitious facts (this latter facet is termed AI hallucinations, which is another lousy and misleading naming that anthropomorphizes AI, see my elaboration at the link here). A person using generative AI can be fooled into believing generative AI due to the aura of competence and confidence that comes across in how the essays or interactions are worded. The bottom line is that you need to always be on your guard and maintain a constant mindfulness of doubt toward what is being outputted. Make sure to double-check anything that generative AI emits. Better safe than sorry, as they say.

Into all of this comes a slew of AI Ethics and AI Law considerations.

There are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned and erstwhile AI ethicists are trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws being bandied around as potential solutions to keep AI endeavors from going amok on human rights and the like. For my ongoing coverage of AI Ethics and AI Law, see the link here and the link here.

The development and promulgation of Ethical AI precepts are being pursued to hopefully prevent society from falling into a myriad of AI-induced traps. For my coverage of the UN AI Ethics principles as devised and supported by nearly 200 countries via the efforts of UNESCO, see the link here. In a similar vein, new AI laws are being explored to try and keep AI on an even keel. One of the latest takes consists of the proposed AI Bill of Rights that the U.S. White House recently released to identify human rights in an age of AI, see the link here. It takes a village to keep AI and AI developers on a rightful path and deter the purposeful or accidental underhanded efforts that might undercut society.

That quick rundown of what’s up with generative AI should hopefully put us all in the same mindset and allow me to next dive into the matter at hand.

Diving Into Programming And Controlling Robots Via Generative AI

When you want to program or control a robot, the typical approach consists of writing programming code to do so.

The designer of the robot will beforehand decide what kind of programming language is required. Some robots are devised to be programmed only via arcane coding languages that are specifically crafted to control robots. The chosen coding language might be one of the many that are publicly available. On the other hand, sometimes a robot maker will opt to invent a proprietary coding scheme under the belief that this will be more optimal and streamlined when seeking to program their product line of robotic systems.

Another approach consists of plugging in a natural language processing (NLP) component that can be used to essentially “program” the robot. For example, some robots are easily programmed by speaking to Alexa or Siri. You tell the NLP what you want to do and the NLP then generates code that is fed into the robot.

An additional variation consists of embedding the NLP into the robot itself. Generally, the NLP then doesn’t particularly have to generate code to control the robot. The NLP is ingrained already into the robotic system, and the commands or instructions that you convey to the NLP will more or less immediately control the robot.

I’m guessing that you can readily discern from this explication that there are a multitude of ways to program or control robots. We might as well add generative AI to the list as a means of programming or controlling robots.

Consider these crucial ways of utilizing generative AI for the programming or control of robots:

  • 1) Generative AI as a code generator. Generative AI generates robotic programming code that is fed into the targeted robot.
  • 2) Generative AI as a natural language generator. Generative AI produces natural language instructions that are emitted to control a robot.
  • 3) Generative AI as direct connect. Generative AI is directly connected to a robot and has pre-built connections to control the robot.
  • 4) Generative AI embedded. Generative AI is embedded into a robot and then the AI generates needed under-the-hood programming code.
  • 5) Generative AI immersed. Generative AI is immersed in the robotic system and immediately controls the robot (no code generation involved).
  • 6) Other

Allow me to briefly explain this.

Suppose you interact with generative AI and tell it to make a robot lift its robotic arm into the air.

In the first bullet point mentioned above, the generative AI would produce programming code that is to be then fed into the targeted robot. You would want to make sure that the right kind of programming code can be generated by the generative AI. As earlier emphasized, the robotic system might have a highly specialized coding scheme. The generative AI might not be data-trained in that particular coding scheme and therefore be unable to suitably generate the proper code for that robotic system.

Realize that in this first instance, the generative AI is completely separate and apart from the robotic system. You have your generative AI. You have your robot. You are using generative AI to generate the programming code. Somehow, you then need to get that code into the robot. You might do this yourself, or perhaps the generative AI has produced a file containing the code and this file can be fed directly into the robotic system.
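To make that first avenue concrete, here is a minimal sketch of the code-generator pipeline in Python, assuming the OpenAI Python client. The model name, the prompt, and the output file path are illustrative placeholders rather than a vetted robot-programming setup.

# A minimal sketch of the code-generator pipeline (first bullet point),
# assuming the OpenAI Python client. Model name, prompt, and file path
# are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

prompt = (
    "Generate Python code for a kitchen robot that lifts its robotic arm. "
    "Use only the robot's documented control functions."
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

generated_code = response.choices[0].message.content

# The generative AI and the robot are entirely separate in this approach.
# A human (or a separate transfer step) must move this file onto the robot.
with open("robot_instructions.py", "w") as f:
    f.write(generated_code)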

In the second bulleted point, I mention that the generative AI might produce natural language instructions to control the robot. Here’s how that works. Suppose a robot has been devised that contains an NLP interface. You can in essence write natural language instructions or you can speak your words aloud to the robot.

The gotcha is that the natural language interface of the robot might be somewhat tricky or restricted. Perhaps you are required to give your natural language commands in a certain order or by using specially reserved words. This can be frustrating for a person that doesn’t know what the particulars of this specialized command language consist of.

Enter generative AI. The generative AI might be data-trained in the details of the constrained natural language of the robot. You can then tell the generative AI, in a completely unconstrained way, the actions that you want the robot to perform. In turn, the generative AI converts your fully natural language utterance into the constrained natural language that befits the robot, assuming that the generative AI was accordingly data-trained to do so.
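As a hypothetical illustration of that translation step, consider the following sketch. The constrained command vocabulary shown here is invented for this example and not taken from any actual robot.

# A hypothetical sketch of translating free-form requests into a robot's
# constrained command vocabulary (second bullet point). The command set
# here is invented purely for illustration.
ROBOT_COMMANDS = ["MOVE_TO <location>", "GRAB <object>", "RELEASE", "STOP"]

command_list = "\n".join(ROBOT_COMMANDS)
translation_prompt = f"""
You control a robot that only understands these commands:
{command_list}

Translate the user's request into a newline-separated list of those
commands and output nothing else.

User request: "Please bring me the red cup from the counter."
"""

# Sent to a suitably data-trained generative AI, this prompt might yield
# something like:
#   MOVE_TO counter
#   GRAB red_cup
#   MOVE_TO user
#   RELEASE
# which would then be passed along to the robot's restricted NLP interface.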

In the third bulleted point, an AI maker of a generative AI app might decide that they will directly connect their generative AI to a particular robotic system. They work with the robotic maker to make this as seamless as possible. When you interact with the generative AI, it can directly interact with the robot and there isn’t any gap per se between the generative AI and the robot.

For the fourth and fifth bulleted points, we can go further with this immersion by opting to embed a generative AI app into the robot itself. Rather than interacting with a generative AI app that is say on the web or an app on your smartphone, you are interacting with generative AI that has been integrated fully into the robot.

Those are the various avenues of making use of generative AI for programming or controlling a robot. I realize this might seem somewhat conceptual and you might find it easier to see an example of something along these lines. I seek to oblige with that desire. We will next take a look at an example of using generative AI that is akin to the idea of controlling a robot that can be programmed via conventional but augmented programming code (my first bulleted point above).

On Using ChatGPT For Robotics Control

I will leverage an insightful example that was covered in an excellent research paper entitled “ChatGPT for Robotics: Design Principles and Model Abilities” by Sai Vemprala, Rogerio Bonatti, Arthur Bucker, and Ashish Kapoor, Microsoft Autonomous Systems and Robotics Research Report, February 20, 2023.

Take a look at the overall summary of their paper:

  • “This paper presents an experimental study regarding the use of OpenAI’s ChatGPT for robotics applications. We outline a strategy that combines design principles for prompt engineering and the creation of a high-level function library that allows ChatGPT to adapt to different robotics tasks, simulators, and form factors. We focus our evaluations on the effectiveness of different prompt engineering techniques and dialog strategies toward the execution of various types of robotics tasks. We explore ChatGPT’s ability to use free-form dialog, parse XML tags, and synthesize code, in addition to the use of task-specific prompting functions and closed-loop reasoning through dialogues. Our study encompasses a range of tasks within the robotics domain, from basic logical, geometrical, and mathematical reasoning all the way to complex domains such as aerial navigation, manipulation, and embodied agents.”
  • “We show that ChatGPT can be effective at solving several of such tasks, while allowing users to interact with it primarily via natural language instructions. In addition to these studies, we introduce an open-sourced research tool called PromptCraft, which contains a platform where researchers can collaboratively upload and vote on examples of good prompting schemes for robotics applications, as well as a sample robotics simulator with ChatGPT integration, making it easier for users to get started with using ChatGPT for robotics.”

Make sure to take a look at their GitHub posting if you want to try out the practices covered.

There is one example they provide that I think might be fun and insightful to delve into herein.

Suppose there was a robot in your home that could cook in your kitchen. The robot can be programmed or controlled by using the Python programming language (an extremely popular programming language known by most contemporary software engineers). If you wanted to do so, you could learn to program in Python and thus control your kitchen cooking robot. Perhaps you don’t want to go to the trouble of learning Python since you’ve got a lot on your plate already (a kitchen pun).

We will use generative AI, in this case ChatGPT, to generate the Python-based robot instructions for us.

Per the above research paper, they entered a prompt into ChatGPT to lay out what they want to have undertaken:

“Imagine we are working with a household robot. The job of this robot is to make an omelette. The objects available around are: fridge, bowl, pan, oil, stove.”

“The main functions you can use are:”

  • “locate_object(object_name): Returns the XYZ coordinates of an object of interest.”
  • “go_to_location(object_name): Moves robot to a location specified by XYZ coordinates. Returns nothing.”
  • “pick_up(object_name): Picks up the object of interest. Returns nothing.”
  • “use_item(object_name): Takes the name of an object as input. Performs an appropriate action on the object at hand (pour oil, crack eggs). Returns nothing.”

“Can you make use of these to write code to go to the kitchen and make an omelette?”

Notice that this showcases the ability to on-the-fly data-train the generative AI for the task at hand. We simply tell the AI in a prompt what the main functions consist of, encompassing locate_object, go_to_location, pick_up, and use_item. This is quite handy and straightforward.

The research paper indicated that this was the output generated by ChatGPT:

“Sure, here is an example Python code that utilizes the provided functions to prepare an omelette:”

Line001: fridge_location = locate_object("fridge")
Line002: bowl_location = locate_object("bowl")
Line003: pan_location = locate_object("pan")
Line004: stove_location = locate_object("stove")
Line005: oil_location = locate_object("oil")
Line006: # Start the omelette-making process
Line007: go_to_location(fridge_location)
Line008: pick_up("eggs")
Line009: go_to_location(bowl_location)
Line010: use_item("eggs") # Crack eggs into the bowl
Line011: go_to_location(pan_location)
Line012: pick_up("pan")
Line013: use_item("pan") # Place the pan on the stove
Line014: go_to_location("oil")
Line015: pick_up("oil")
Line016: go_to_location(pan_location)
Line017: use_item("oil") # Pour oil into the pan
Line018: go_to_location("bowl")
Line019: pick_up("bowl")
Line020: go_to_location(pan_location)
Line021: use_item("bowl") # Pour the egg mixture into the pan
Line022: go_to_location(stove_location)
Line023: use_item("stove") # Turn on the stove
Line024: print("Enjoy your omelette!")

The first five lines of the generated code essentially get the robot to locate where things are, such as where the fridge is, where the pan is, etc.

After having located those vital items, most of the subsequent coding lines indicate that the robot should go to the particular location of an object and pick up the object. The last few lines consist of telling the robot to make use of the items and go ahead and cook the omelet.
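By the way, if you wanted to trace this generated code without a physical robot, you could stub out the four declared functions in plain Python. The harness below is hypothetical (it is not from the paper) and merely prints the robot’s intended action sequence.

# A hypothetical dry-run harness (not from the paper) that stubs out the
# four functions the prompt declared, so the generated code can be traced
# on an ordinary computer before it ever touches a physical robot.
def locate_object(object_name):
    print(f"[sim] locating {object_name}")
    return (0.0, 0.0, 0.0)  # placeholder XYZ coordinates

def go_to_location(location):
    print(f"[sim] moving to {location}")

def pick_up(object_name):
    print(f"[sim] picking up {object_name}")

def use_item(object_name):
    print(f"[sim] using {object_name}")

# Pasting the generated lines beneath these stubs prints out the intended
# action sequence, making omissions or mis-orderings easier to spot.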

When Things Don’t Go As Planned

I’d like to closely examine some aspects so that I can reveal to you some potential pitfalls and gotchas that can arise when combining generative AI and robots.

The generated instructions involved going to the fridge, getting the eggs, and then going to the location of the bowl and cracking the eggs into the bowl (lines 007 to 010). Suppose that the generative AI made an error during the code production process. Imagine that the generative AI neglected to include the code of lines 007 to 009. Thus, when the coding line about cracking the eggs into the bowl is encountered by the robot, there haven’t been any prior instructions about going to the fridge to get the eggs. In that case, the robot would not have the eggs and would therefore not be able to crack the eggs into a bowl.

I bring this up to illuminate that there isn’t any particular common sense per se that the generative AI is using to derive the set of instructions or code. A human would logically deduce that to crack the eggs into the bowl, you first need to have the eggs in your hands. We don’t know that the generative AI would realize that vital logical imperative (one supposes that on a mathematical and computational basis, this might be computed by the generative AI as needed).

I am not suggesting that a human couldn’t make the same kind of mistake. A human could make that mistake. Have you ever sought to crack eggs into a bowl, only to realize that you had mindlessly forgotten to first grab the eggs? Sure, this can readily happen to any of us.

My emphasis here is that the generated code by the generative AI can contain errors such as inadvertently leaving out needed lines of code or placing them into an improper sequence. It can contain falsehoods, such as the code might indicate to cook ham with eggs, even though there isn’t any ham and no other instructions associated with ham. The generative AI might incur a glitch or an AI hallucination.

For example, suppose the code generated by the AI app says to crack up the bowl and cook it. You and I would realize this is not correct, but the generative AI might not and the robotic system might not. The gist is that generative AI can produce code or instructions that do not make reasonable sense. Unfortunately, the generative AI might lack any guardrails or checks and balances to try and detect that the generated code is going to be troubling.

The same can be said about the robot. It could be that the robot doesn’t have any guardrails or checks and balances associated with the code or instructions being fed into it. The robot might proceed to crack up a bowl and cook the pieces if that’s what the instructions or code said to do. A long-time saying in the computer field is that oftentimes you get garbage-in and garbage-out (referred to as GIGO). Bad or improper code or instructions that are fed into a robot might produce adverse or improper results.

This pertains to my opening remarks herein that the words of generative AI can have real-world consequences when tied to a robot. The lines of code that we have been examining would on their own do nothing. It wouldn’t matter that, say, some of the code was error-prone. On the other hand, once we feed the code or instructions into a robot, the real world does enter into the picture.

Suppose the robot proceeded to try and cook the broken pieces of the bowl. All manner of things can go awry. Maybe this harms the stove. Maybe things catch on fire. The odds are that the kitchen cooking robot is not going to be responsive to these dire problems. The robot is doing as it was programmed or commanded to do. Also, the generative AI doesn’t particularly have any ongoing stake in the actions of the robot. A generative AI-produced set of code is just a bunch of words. There aren’t any direct consequences to the generative AI for having fouled up.

A handful of significant lessons immediately come to mind.

First, we mustn’t assign an anthropomorphic quality to either the generative AI or the robot. They do not have a semblance of sentience that might allow them to question the sensibility of what is taking place. This is a bit scary since they will proceed straight ahead, unabated, come heck or high water.

Second, we need to embrace AI Ethics and seek to get the makers of generative AI to focus on including checks and balances when their generative AI is being used to program or control robots. The generative AI itself needs to have some built-in or at least data-trained guardrails about such matters.

Third, the same AI Ethics embracement ought to include ensuring that checks and balances are embedded into whatever code or instructions are generated. The code or instructions that are to be fed into a robot should include explicitly devised code or instructions that continually seek to detect whether things are going awry once the robot starts using the code. A human would presumably be doing something of that nature. They would (hopefully) always be on their toes and asking whether whatever they are doing seems to be sensible. The code or instructions generated by generative AI could contain similar kinds of checks and balances (not on par with humans, but you get the drift).
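As a modest illustration of that third lesson, here is a sketch of a guard that tracks what the robot is holding and refuses actions whose preconditions plainly fail. The class name and the specific checks are hypothetical and, as noted, nowhere near human-grade common sense.

# A minimal sketch of embedded checks and balances. The class name and the
# specific checks are hypothetical; real guardrails would be far richer.
class GuardedRobot:
    def __init__(self):
        self.holding = None  # track what the robot currently holds

    def pick_up(self, object_name):
        if self.holding is not None:
            raise RuntimeError(
                f"Cannot pick up {object_name}: already holding {self.holding}"
            )
        self.holding = object_name

    def use_item(self, object_name):
        if object_name == "eggs" and self.holding != "eggs":
            # The missing-eggs scenario from earlier: halt loudly rather
            # than blindly executing an impossible instruction.
            raise RuntimeError("Cannot crack eggs: no eggs in hand")
        print(f"[robot] using {object_name}")

robot = GuardedRobot()
try:
    robot.use_item("eggs")  # no prior pick_up("eggs"), so the guard trips
except RuntimeError as err:
    print(f"[guardrail] halted: {err}")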

Fourth, we need to avoid the upcoming finger-pointing that is bound to arise. Here’s what I mean. A robot messes up and starts a kitchen fire. Who or what is to blame? It could be that the generative AI-produced code led to the kitchen fire. But the AI maker might insist that they did nothing wrong since it was the robot that put the words into action. The robot maker might retort that it was the improper code or instructions that led to the fire. The robot did what it was told to do.

You can see how this could go round and round. That’s why some believe that we might need new AI-related laws to stipulate the responsibility underlying AI that goes awry. Since we don’t yet allow for AI to have legal personhood, see my coverage at the link here, you cannot simply shrug your shoulders and proclaim that the AI or the robot itself was to blame. At this time, we would need to look at the AI maker of the generative AI or the robot maker of the robot (or others).

I’ll add a point that might get your dander up.

Maybe we can blame the person that used the generative AI and/or that used the robot. They should have examined the generated code and determined that it left out steps or otherwise was somehow defective. They also ought to have been closely monitoring the kitchen robot. The moment that the robot cracked up the bowl would have been a sure sign that something was amiss. The human let the generative AI proceed to generate bad instructions and furthermore allowed the robot to do dumb things.

Hold the user accountable.

The reason your dander might be up is that your counterargument would be that you relied upon the generative AI to generate the proper code. You didn’t understand the Python code and you didn’t know how to assess whether the code was sound. In terms of the robot, the idea that you would need to continually monitor the robot is absurd. You expect that the devices that you buy will work properly. You don’t stand and watch your microwave oven and you don’t stand around watching your refrigerator to make sure it works as expected.

What amount of due diligence is needed or required on the part of the user when it comes to generative AI and robots?

Nobody can as yet say for sure. Until we end up with legal cases and issues involving presumed harm, this is a gray area. For lawyers that want to get involved in AI and law, this is going to be an exciting and emerging set of legal challenges and legal puzzles that will undoubtedly arise as the use of generative AI becomes ever more ubiquitous and robots become affordable and practical in our daily lives.

You might also find of interest that some of the AI makers have contractual or licensing clauses stipulating that if they get sued for something you did as a result of using their generative AI, you indemnify the AI maker and pledge to pay for their costs and expenses to fight the lawsuit, see my analysis at the link here. This could be daunting for you. Suppose that the house you were cooking in burns to the ground. The insurer sues the AI maker claiming that their generative AI was at fault. But, whether you knew it or not, you agreed to the indemnification clause, thus the AI maker comes to you and says you need to pay for their defense.

Ouch.

Conclusion

One proffered means to try and get some better grounding on the emerging and at times controversial topic of using generative AI to control robots is that we should require the use of simulations at all times.

Let’s unpack that.

Suppose we have a computer-based simulation that can work in a simulated environment and essentially pretend to do whatever the real-world robot might do. Any use of generative AI to produce code or instructions to control the real-world robot is required to first try out the code within the realm of the simulated version of the robot. If things go awry, there isn’t any true harm and no foul. The simulation kept us from doing the same thing in the real world. The generative AI could then correspondingly be adjusted or the code could be modified as needed.

Rinse and repeat until the code or instructions are considered clean and ready to go. We are cleverly aiming to eliminate any real-world adverse consequences via the a priori use of simulations. This could be an AI Ethics precept of a soft law nature, plus could be established as an AI Law on the books (a hard law, as they say).
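In Python terms, such a simulate-first regime might look roughly like the following sketch. The simulator, the fault reporting, and the retry limit are all invented here for illustration; a real pipeline would be far more elaborate.

# A hypothetical simulate-before-deploy loop. The simulator, the fault
# reporting, and the retry limit are invented for illustration only.
def run_in_simulation(generated_code):
    """Execute the code against a simulated robot; return a list of faults."""
    faults = []
    # ... run generated_code with stubbed robot functions, recording any
    # precondition violations, collisions, or other unsafe actions ...
    return faults

def clear_for_real_world(generated_code, max_attempts=3):
    for attempt in range(max_attempts):
        faults = run_in_simulation(generated_code)
        if not faults:
            return True  # passed the simulated clearance check
        print(f"Attempt {attempt + 1}: {len(faults)} fault(s); regenerating")
        # ... feed the faults back to the generative AI for a revised try ...
    return False  # never feed unvetted code into the physical robot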

It is a prudent step.

The question is whether people will be willing to endure the cost or potential delay of first having to pass a suitable clearance score for their generative AI code or instructions through some kind of elaborate simulation before programming their robot. The cost might be high. The delay might be substantial. People might instead be tempted to just assume that all is well and the simulation is unnecessary.

The use of simulations is decidedly not a silver bullet.

I say this because the simulation might only exercise a range of conditions. It is unlikely to be entirely exhaustive and complete. As such, the simulation might not catch a potential future problem. You end up getting a qualified rating from the simulation that the generative AI code seems to be relatively safe to use. Perhaps that covers the bulk of the use. But then when the rarer circumstance arises, questions will certainly be raised as to why the simulation didn’t detect an issue and once again who the responsible party for an adverse outcome is.

You could say that simulations would be an added layer of safeguarding. It won’t eliminate the chances of some future harm, but at least might reduce the odds. As a society, we’ll need to determine how rigorous this is and whether it is a requirement versus an optional choice.

A final remark for now.

I’m sure that you’ve heard or seen the banner headlines forewarning that we are heading into an era whereby AI is an existential or extinction risk for humankind. My coverage of these claims has sought to showcase that no one can be sure of these contentious pronouncements, and that some have astutely noted we already face plenty of day-to-day risks from everyday AI, such that we are perhaps being tricked into fixating on remote chances rather than closely examining the real-world risks already before us, see my coverage at the link here.

With contemporary generative AI, we already face the risk, for example, of mental health harm by those that use generative AI and rely upon it for mental health advice, see the concerns expressed at the link here. You might take some solace that at least generative AI right now is mainly all about words. Those words are just words. Only if humans act upon those words do we get entangled in physical real-world consequences.

We are giving a new leg up to the harms that can be caused by generative AI by interconnecting generative AI and robots. Generative AI can turn words into the throwing of sticks and stones by instructing robots to take such actions.

For those that observe and believe in the Doomsday Clock, whose dial is moved closer to midnight when things in the world look ominous, you could convincingly argue that mating together generative AI and robots deserves another ratcheting tick toward unfettered AI overlords and the outright takeover or enslavement of humanity by robots. We might be shooting ourselves, as humankind, in the foot.

I don’t though want to conclude this discussion on such a dour note. Let’s slightly recalibrate. You could take the sunny or happy face view of things and proclaim that the use of generative AI will democratize robots and allow the world at large to enjoy and find freedom by making wide use of robotic systems. We are going to speed toward aiding and benefiting humanity. That seems a more cheerful perspective.

Or is that what devious AI wants us to think?

Be extremely mindful when you next opt to program your kitchen robot to make an omelet and you use generative AI to do so. There is the old line that when making a ham omelet, the chicken is cavalierly involved by providing the eggs, while the pig must make a far more substantive commitment to provide the ham.

As a human using generative AI and robots, are you more akin to the chicken or the pig?

Something to give some serious consideration to.
