Can religion save us from artificial intelligence?


Sometimes Rabbi Joshua Franklin knows exactly what he wants to talk about in his weekly Shabbat sermons — other times, not so much. It was on one of those not-so-much days on a cold afternoon in late December that the spiritual leader of the Jewish Center of the Hamptons decided to turn to artificial intelligence.

Franklin, 38, who has dark wavy hair and a friendly vibe, knew that OpenAI’s new ChatGPT program could write sonnets in the style of Shakespeare and songs in the style of Taylor Swift. Now, he wondered if it could write a sermon in the style of a rabbi.

So he gave it a prompt: “Write a sermon, in the voice of a rabbi, about 1,000 words, connecting the Torah portion this week with the idea of intimacy and vulnerability, quoting Brené Brown” — the bestselling author and researcher known for her work on vulnerability, shame and empathy.
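For readers curious to reproduce the experiment programmatically rather than through the chat window, the sketch below shows one way to send a prompt like Franklin's to OpenAI's API using its official Python client. This is an illustration only: Franklin used the public ChatGPT website, and the package version, model name and environment variable shown here are assumptions.

```python
# A minimal sketch, assuming the openai Python package (v1.x) is installed
# and an OPENAI_API_KEY environment variable is set. Franklin used the
# ChatGPT website, so the model chosen here is illustrative, not his.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

prompt = (
    "Write a sermon, in the voice of a rabbi, about 1,000 words, "
    "connecting the Torah portion this week with the idea of intimacy "
    "and vulnerability, quoting Brené Brown."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name, for illustration only
    messages=[{"role": "user", "content": prompt}],
)

# Print the generated sermon text
print(response.choices[0].message.content)
```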

The result, which he shared that evening in the synagogue’s modern blond wood sanctuary and later posted on Vimeo, was a coherent if repetitive talk that many in his congregation guessed had been crafted by famous rabbis.

“You’re clapping,” Franklin said after revealing that the sermon he’d just delivered was composed by a computer. “I’m terrified.”

As experiments like Franklin’s and the recent unsettling conversation between a tech columnist and Microsoft’s new chatbot demonstrate just how eerily human-like some AI programs have become, religious thinkers and institutions are increasingly wading into the conversation around the ethical uses of a rapidly expanding technology that might one day develop a consciousness of its own — at least according to its Silicon Valley apostles. Calling upon a wide range of myths from Icarus to the Tower of Babel to the tale of the genie who can grant all our wishes with disastrous results, they are sounding an ancient warning about what happens when humans try to play God.

Before delivering the sermon ChatGPT had written, Rabbi Franklin told his congregation that what he was about to read had been plagiarized.

“Friends,” he began, reading from the AI-scripted sermon, “as we gather today to study the Torah portion of the week, Vayigash, let us consider the importance of developing intimacy in our relationship with others.”

The robotic sermon went on to relate the story of when Joseph, the son of Jacob, was reunited with his brothers after many years. Although they had betrayed him in the past, Joseph greeted them with warmth and love.

“By approaching them with openness and vulnerability he’s able to heal old wounds and create deeper, more meaningful bonds with his siblings,” Franklin read. “This is a powerful lesson for all of us.”

It was an adequate sermon, but not the one Franklin would have penned. “What was missed was the idea of how we find God in meaningful encounters with others,” he said later. “How community and relationship creates God in our lives.” In other words, a sense that the sermon had sprung from the lived experience of a yearning, questing, suffering human being rather than an algorithmic formula.

As AI continues to improve, it is possible that spiritual leaders may one day be replaced by robots.

But most theologians say other ethical concerns relating to AI are more pressing. They worry about growing financial inequality as automation eliminates thousands of jobs, and they question our ability to exercise free will as we increasingly rely on computer algorithms to make decisions for us in medicine, education, the judicial system and even how we drive our cars and what we watch on TV.

On a more existential level, the better AI becomes at mimicking human intelligence, the more it will call into question our understanding of sentience, consciousness, and what it means to be human. Do we want AI-driven robots to become our servants? Will they have feelings? And are we obliged to treat them as if they did?

These ethical dilemmas may feel new, but at their core they represent issues that faith traditions such as Judaism, Islam and Christianity have grappled with for millennia, religious leaders say.

Though religious institutions have not always behaved ethically in the past, they have centuries of experience parsing moral conundrums through the lens of their own belief systems, said the Rev. James Keenan, a Catholic theologian at Boston College.

“There are certain ways you can say all these great traditions are problematic, but they also have their insights and wisdom,” he said. “They have a history behind them that is worth tapping into.”

Since the earliest days of AI research in the 1950s, the desire to create a human-like intelligence has been compared to the legend of the golem, a mythical creature from Jewish folklore, created by powerful rabbis from mud and magic to do its master’s bidding. The most famous golem is the one allegedly made by the 16th century Rabbi Judah Loew ben Bezalel of Prague to protect the Jewish people from antisemitic attacks. The golem also served as an inspiration for Mary Shelley’s Frankenstein.

For centuries, the idea of an animate creature made by man and lacking a divine spark or a soul has been part of the Jewish imagination. Rabbis have argued over whether a golem can be considered a person, if it could be counted in a minyan (the quorum of 10 men required for traditional Jewish public prayer), if it could be killed, and how it should be treated.

From these rabbinic discussions, an ethical stance on artificial intelligence emerged long before computers were invented, said Nachson Goltz, a law professor at Edith Cowan University in Australia who has written about the Jewish perspective on AI. While it is considered permissible to create artificial entities to assist us in our tasks, “we must remember our responsibility to keep control over them, and not the other way around,” he wrote.

Rabbi Eliezer Simcha Weiss, a member of the Chief Rabbinate Council of Israel, echoed this idea in a recent speech. “In every story of the golem, the golem is finally destroyed or dismantled,” he said. “In other words, the lesson the rabbis are teaching is that anything man makes has to be controlled by man.”

The rabbis also concluded that while a golem could not be considered a full person, it was still important to treat it with respect.

“The way we treat these things impacts us,” Goltz said. “The way we treat them determines the development of our own characters and sets the future course of our own exercise of moral agency.”

(Illustration: An ethereal computer-generated being emerges from between two circular planes. Mark Pernice / For The Times)

Another cautionary tale from Jewish and Muslim folklore revolves around the djinn, a nonhuman entity made of smokeless fire that can occasionally be bound by humans and chained to their will. This is the origin of the story of the genie who can grant us anything we want, but cannot be put back in the bottle.

“The stories of the genie are an example of what happens when you ask a nonhuman to grant human wishes,” said Damien Williams, a professor of philosophy and data science at the University of North Carolina at Charlotte. “What comes out the other side seems shocking and punitive, but if you actually trace it back, they are simply granting those desires to the fullest extent of their logical implications.”

Islam provides another ethical lens through which to look at AI development. A legal maxim of Islamic jurisprudence states that repelling harm always has priority over the procurement of benefits. From this point of view, a technology that helps some people but puts others out of a job would be deemed unethical.

“Most of these technologies are being designed and deployed in many cases for the sake of it, and the harms that accrue are sometimes probabilistic,” said Junaid Qadir, a professor of electrical engineering at Qatar University who organized a conference on Islamic Ethics and AI. “We don’t know what it will be; technology has its own unintended effects.”

Overall, Islamic tradition encourages a cautious approach to new technology and its uses, said Aasim Padela, a professor of emergency medicine and bioethics at the Medical College of Wisconsin.

“Things that try to make you rival God are not thought of as a purpose to pursue,” he said. “Trying to seek immortality through a brain transfer, or to make a better body than the one you’ve got, those impulses are to be checked. Immortality is in the afterlife, not here.”

“The Rule of St. Benedict,” a book written in the sixth century as a guide to monastic life, offers an answer to questions about how we can ethically interact with AI, both now and in the future when we might encounter robots with human features, said Noreen Herzfeld, a professor of theology and computer science at St. John’s University and the College of St. Benedict in Minnesota.

In the section of the book addressing the cellarer — the person in charge of the monastery’s provisions — St. Benedict tells the cellarer to treat everyone who comes to him with a kind word, and to treat all the inanimate objects in his storehouse “as if they were consecrated vessels of the altar.”

“To me that is something we can apply to AI,” Herzfeld said. “People always come first, but we must treat AI with respect, with care, because all earthly things should be treated with respect. The way you treat things is part of what informs your own character, and informs how you treat the Earth and other human beings.”

The Catholic Church has been especially vocal in the push for an ethics of AI that benefits humanity, centers human dignity and does not have as its sole goal greater profit or the gradual replacement of people in the workplace.

“Indeed, if technological progress increases inequality, it is not true progress,” Pope Francis said in a November 2020 video announcing his prayer intention that robotics and artificial intelligence may always serve humankind.

The Vatican’s goal is not to slow down the development of artificial intelligence, but the church does believe caution is essential, said Paolo Benanti, a Franciscan monk and one of the pope’s chief advisors on new technology.

“On the one hand we do not want to limit any of the transformative impulses that can lead to great results for humanity; on the other hand, we know that all transformations need to have a direction,” he wrote in an email. “We have to be aware that if AI is not well managed, it could lead to dangerous or undesirable transformations.”

To that end, Vatican leaders helped craft the Rome Call for AI Ethics, a pledge first signed in 2020 by representatives of the Pontifical Academy for Life, IBM, Microsoft and the Italian Ministry of Innovation among others to champion the creation of AI technologies that are transparent, inclusive and impartial. On Jan. 10, leaders from Jewish and Islamic communities gathered at the Vatican to add their signatures as well.

Asking technology companies to prioritize humanitarian goals rather than corporate interests may feel like an unlikely proposition, but the influence of the religious hierarchy on AI ethics shouldn’t be underestimated, said Beth Singler, a professor of digital religions at the University of Zurich.

“It can help the masses of believers to think critically and use their voices,” she said. “The more the conversation is had by significant charismatic voices like the pope, it will only increase the possibility that people can, from a grassroots level, appreciate what’s going on and do something about it.”

Benanti agreed.

“The billions of believers who inhabit the planet can be a tremendous force for turning these values into something concrete in the development and application of AI,” he said.

As for Franklin, the rabbi in the Hamptons, he said that his experiment with ChatGPT has ultimately left him feeling that the rise of AI could have an upside for humanity.

Though artificial intelligence may be able to mimic our words, and even read our emotions, what it lacks is the ability to feel our emotions, understand our pain on a physical level, and connect deeply with others, he said.

“Compassion, love, empathy, that’s what we do best,” he said. “I think that ChatGPT will force us to hone those skills and become, God willing, more human.”
