In AI race, Microsoft and Google choose speed over caution

SAN FRANCISCO — In March, two Google employees, whose jobs are to review the company’s artificial intelligence products, tried to stop Google from launching an AI chatbot. They believed it generated inaccurate and dangerous statements.

Ten months earlier, similar concerns were raised at Microsoft by ethicists and other employees. They wrote in several documents that the AI technology behind a planned chatbot could flood Facebook groups with disinformation, degrade critical thinking and erode the factual foundation of modern society.

The companies released their chatbots anyway. Microsoft was first, with a splashy event in February to reveal an AI chatbot woven into its Bing search engine. Google followed about six weeks later with its own chatbot, Bard.

The aggressive moves by the normally risk-averse companies were driven by a race to control what could be the tech industry’s next big thing: generative AI, the powerful new technology that fuels those chatbots.

That competition took on a frantic tone in November when OpenAI, a San Francisco startup working with Microsoft, released ChatGPT, a chatbot that has captured the public imagination and now has an estimated 100 million monthly users.

The surprising success of ChatGPT has made Microsoft and Google more willing to take risks with the ethical guidelines they set up over the years to ensure their technology does not cause societal problems, according to 15 current and former employees and internal documents from the companies.

The urgency to build with the new AI was crystallized in an internal email sent last month by Sam Schillace, a technology executive at Microsoft. He wrote in the email, which was viewed by The New York Times, that it was an “absolutely fatal error in this moment to worry about things that can be fixed later.”

When the tech industry is suddenly shifting toward a new kind of technology, the first company to introduce a product “is the long-term winner just because they got started first,” he wrote. “Sometimes the difference is measured in weeks.”

Last week, tension between the industry’s worriers and risk-takers played out publicly as more than 1,000 researchers and industry leaders, including Elon Musk and Apple co-founder Steve Wozniak, called for a six-month pause in the development of powerful AI technology. In a public letter, they said it presented “profound risks to society and humanity.”

Regulators are already threatening to intervene. The European Union proposed legislation to regulate AI, and Italy temporarily banned ChatGPT last week. In the United States, President Joe Biden on Tuesday became the latest official to question the safety of AI.

“Tech companies have a responsibility to make sure their products are safe before making them public,” he said at the White House. When asked if AI was dangerous, he said, “It remains to be seen. Could be.”
