Consumer groups in more than a dozen European countries are calling for urgent government investigations into the risks of generative AI.
The call comes alongside a new report from Norwegian consumer organisation Forbrukerrådet, which concludes that more rules need to be developed.
“We call on safety, data and consumer protection authorities to start investigations now and not wait idly for all kinds of consumer harm to have happened before they take action,” says Ursula Pachl, deputy director general of the European Consumer Organisation (BEUC), Europe’s biggest consumer group.
“These laws apply to all products and services, be they AI-powered or not, and authorities must enforce them.”
The call focuses on several challenges. Certain AI developers, the report points out, have closed off their systems from external scrutiny, making it very hard to understand how data has been collected or decisions are made. Output is often inaccurate, and bias and discrimination are widespread.
Generative AI can also be used to manipulate or mislead consumers, for example by emulating human speech patterns and using emotive language.
There are also concerns around privacy and personal integrity, with image generators, for example, trained on datasets taken from search engines or social media without a lawful basis or the knowledge of the people concerned.
“Generative AI such as ChatGPT has opened up all kinds of possibilities for consumers, but there are serious concerns about how these systems might deceive, manipulate and harm people,” says Pachl.
“They can also be used to spread disinformation, perpetuate existing biases which amplify discrimination, or be used for fraud.”
The European Data Protection Board has already created a taskforce to look into ChatGPT. Meanwhile, the EU has just approved the AI Act, aimed at tackling issues such as those highlighted in the report.
“It is crucial that the EU makes this law as watertight as possible to protect consumers. All AI systems, including generative AI, need public scrutiny, and public authorities must reassert control over them,” says Pachl.
“Lawmakers must require that the output from any generative AI system is safe, fair and transparent for consumers.”
Meanwhile in the UK, the Information Commissioner’s Office (ICO) this week warned of tougher checks on whether organisations using generative AI are compliant with data protection laws, and called on them to do more.
“We will be checking whether businesses have tackled privacy risks before introducing generative AI—and taking action where there is risk of harm to people through poor use of their data. There can be no excuse for ignoring risks to people’s rights and freedoms before rollout,” says ICO executive director of regulatory risk Stephen Almond.
“Businesses need to show us how they’ve addressed the risks that occur in their context—even if the underlying technology is the same. An AI-backed chat function helping customers at a cinema raises different questions compared with one for a sexual health clinic, for instance.”
In the U.S., progress has been slower. AI is currently regulated by a patchwork of agencies, including the Securities and Exchange Commission and the Federal Trade Commission. A bill now before the Senate would require government bodies and agencies to inform users when they are interacting with AI.