A Federal Trade Commission complaint against ChatGPT purveyor OpenAI could slow development of the fast-growing large-language model chatbot that has come to dominate the public conversation over artificial intelligence.
While it’s too early to say whether the complaint will result in any action, it signals that real regulation is on its way. In the United States, Senate Majority Leader Charles Schumer has just announced an ambitious legislative plan to regulate artificial intelligence.
An open letter calling for a six-month pause in the development of powerful AI models has already split the AI research community. But little attention has been paid to the FTC complaint, which could have real teeth.
Alarmed by the capabilities of OpenAI’s latest large language models, the Center for AI and Digital Policy, a nonprofit organization working to safeguard fundamental rights and democratic institutions in the digital age, filed the complaint just days after the open letter, which itself has garnered more than 20,000 signatures.
The complaint follows Congressional testimony by the Center’s chair, Merve Hickok, who argued that “we do not have the guardrails in place, the laws that we need, the public education, or the expertise in government to manage the consequences of the rapid changes that are now taking place.”
Marc Rotenberg, founder of the Center and professor at Georgetown University’s law school, said the Federal Trade Commission is the one agency in the US that has the authority and the ability to act.
Rotenberg is no novice. He is the editor of The AI Policy Sourcebook and a member of the OECD Expert Group on AI, and he has had prior success with FTC complaints against Google and Facebook.
In 2010, through another nonprofit he founded, the Electronic Privacy Information Center (EPIC), he alleged that Google’s Buzz social networking service violated users’ privacy by automatically sharing their email contacts without their consent. The FTC investigated and reached a settlement with Google in 2011, under which Google agreed to implement a comprehensive privacy program and to undergo regular privacy audits for 20 years.
Rotenberg also pursued a complaint with the FTC against Facebook, alleging that the company’s practices violated users’ privacy rights. The complaint led to a $5 billion settlement between Facebook and the FTC in 2019, one of the largest fines ever imposed by the agency.
Rotenberg, a privacy law expert and AI policy advocate, believes that AI innovation and fundamental rights protection need not be a trade-off, and that both outcomes can be achieved through a regulatory strategy. The complaint asks the FTC to stop OpenAI from releasing models more powerful than GPT-4 until government safety protocols and regulations are in place.
He notes that the action isn’t ‘anti-OpenAI.’ “When you write a complaint, you have to identify a particular company, and a particular product,” he explained. “But in the course of the remedies, we’re also asking the FTC to undertake a rulemaking for the AI sector, because we would like to see regulations established that are evenly applied.”
The FTC complaint is the sharp end of a broader movement to speed up regulation of AI.
While some argue that OpenAI’s work should be subject to greater scrutiny and oversight, others contend that such actions risk stifling innovation in the rapidly evolving AI field.
In one corner, OpenAI and its supporters remain steadfast in the pursuit of human-level artificial intelligence, striving to develop AI systems that benefit humanity as a whole. The organization has taken steps to address concerns surrounding GPT-4, such as implementing safety mitigations and seeking public input on system behavior and deployment policies.
“We should move quickly and build things and not pause because there’s a lot of good that can be done, and there’s a lot of value that can be created,” said Andrew Ng, a prominent AI researcher in a conversation on the issue with AI pioneer Yann LeCun.
LeCun, chief AI scientist at Meta, said he opposed a moratorium on any AI development. “We should continue to do research, and do it responsibly, and that includes considering the ethical implications of our work.”
Pedro Domingos, an AI researcher at the University of Washington and author of “The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World,” is blunter: “I think the moratorium letter was a terrible idea, and this is even worse,” he said of the FTC complaint by email. “Should be laughed out of court.”
In the opposing corner stands a coalition of like-minded organizations and individuals, resolute in its quest for accountability and transparency. Italy’s data protection agency has already taken action, temporarily banning ChatGPT, while authorities in Canada, Germany, and Australia are moving forward with their own investigations.
“I agree with the pause,” said Yoshua Bengio, another pioneer of deep learning, who is a former colleague and long-time collaborator of LeCun’s. “However, I don’t think it should be just OpenAI, and I don’t think it should be only the United States.”
Bengio said he might not have supported a pause a year ago, but that ChatGPT has crossed an important threshold by passing the “Turing test,” a measure of machine intelligence proposed by British mathematician and computer scientist Alan Turing in 1950. A computer system is said to have passed the test if its communications are indistinguishable from those of a human.
“That could be exploited in highly dangerous ways that can threaten democracy,” Bengio warned.
No doubt AI is moving faster than anyone expected, catching governments flat-footed. While the FTC would not comment on the OpenAI complaint, other than to acknowledge that it was looking into the matter, the agency and other regulatory bodies have made it clear that they aren’t waiting for new laws.
Unexpected emergent properties in large AI models are appearing like mushrooms in the damp, and that’s worrisome when the tools are already so widely distributed in society. Nearly every industrial area that touches the public is highly regulated, except for AI.
Lawyers point to the legal doctrine of the learned intermediary and the related notion of a professional duty of care, which shield manufacturers and other product developers from liability as long as the expert who recommends their products has been adequately informed of the potential risks and benefits.
But what happens when the AI is more learned than the learned intermediary? Doctors, lawyers, accountants, engineers – the list goes on – will have to decide: do I override the machine? Or is the machine looking at a trillion data points and seeing something I just can’t see, so that I would be causing harm by overriding it?
Who is responsible then for erroneous advice? Is it the data? Is it the programmer? Is it the AI as some kind of agent of the manufacturer or the doctor or the lawyer? Or does the doctor or lawyer or whoever stand as the ultimate final gatekeeper?
These questions will be answered in time and the Federal Trade Commission may be the first regulatory body in the United States to weigh in on generative AI. Meanwhile, the Center for AI and Digital Policy will keep the pressure on.
“We are already asking Congress to press the FTC on the status of our complaint,” Rotenberg said.