Microsoft Calls for A.I. Rules to Minimize the Technology’s Risks


Microsoft endorsed a crop of regulations for artificial intelligence on Thursday, as the company navigates concerns from governments around the world about the risks of the rapidly evolving technology.

Microsoft, which has promised to build artificial intelligence into many of its products, proposed regulations including a requirement that systems used in critical infrastructure can be fully turned off or slowed down, similar to an emergency braking system on a train. The company also called for laws to clarify when additional legal obligations apply to an A.I. system and for labels making it clear when an image or a video was produced by a computer.

“Companies need to step up,” Brad Smith, Microsoft’s president, said in an interview about the push for regulations. “Government needs to move faster.”

The call for regulations punctuates a boom in A.I., with the release of the ChatGPT chatbot in November spawning a wave of interest. Companies including Microsoft and Google’s parent, Alphabet, have since raced to incorporate the technology into their products. That has stoked concerns that the companies are sacrificing safety to reach the next big thing before their competitors.

Lawmakers have publicly expressed worries that such A.I. products, which can generate text and images on their own, will create a flood of disinformation, be used by criminals and put people out of work. Regulators in Washington have pledged to be vigilant for scammers using A.I. and instances in which the systems perpetuate discrimination or make decisions that violate the law.

In response to that scrutiny, A.I. developers have increasingly called for shifting some of the burden of policing the technology onto government. Sam Altman, the chief executive of OpenAI, which makes ChatGPT and counts Microsoft as an investor, told a Senate subcommittee this month that government must regulate the technology.

The maneuver echoes calls for new privacy or social media laws by internet companies like Google and Meta, Facebook’s parent. In the United States, lawmakers have moved slowly after such calls, with few new federal rules on privacy or social media in recent years.

In the interview, Mr. Smith said Microsoft was not trying to slough off responsibility for managing the new technology, because it was offering specific ideas and pledging to carry out some of them regardless of whether government took action.

“There is not an iota of abdication of responsibility,” he said.

He endorsed the idea, supported by Mr. Altman during his congressional testimony, that a government agency should require companies to obtain licenses to deploy “highly capable” A.I. models.

“That means you notify the government when you start testing,” Mr. Smith said. “You’ve got to share results with the government. Even when it’s licensed for deployment, you have a duty to continue to monitor it and report to the government if there are unexpected issues that arise.”

Microsoft, which made more than $22 billion from its cloud computing business in the first quarter, also said those high-risk systems should be allowed to operate only in “licensed A.I. data centers.” Mr. Smith acknowledged that the company would not be “poorly positioned” to offer such services, but said many American competitors could also provide them.

Microsoft added that governments should designate certain A.I. systems used in critical infrastructure as “high risk” and require them to have a “safety brake.” It compared that feature to “the braking systems engineers have long built into other technologies such as elevators, school buses and high-speed trains.”

In some sensitive cases, Microsoft said, companies that provide A.I. systems should have to know certain information about their customers. To protect consumers from deception, content created by A.I. should be required to carry a special label, the company said.

Mr. Smith said companies should bear the legal “responsibility” for harms associated with A.I. In some cases, he said, the liable party could be the developer of an application like Microsoft’s Bing search engine that uses someone else’s underlying A.I. technology. Cloud companies could be responsible for complying with security regulations and other rules, he added.

“We don’t necessarily have the best information or the best answer, or we may not be the most credible speaker,” Mr. Smith said. “But, you know, right now, especially in Washington, D.C., people are looking for ideas.”
