A pair of cases going before the US supreme court this week could drastically upend the rules of the internet, putting a powerful, decades-old statute in the crosshairs.
At stake is a question that has been foundational to the rise of big tech: should companies be legally responsible for the content their users post? So far they have largely been shielded from liability, but some US lawmakers and others want to change that, and the new lawsuits are bringing the statute before the supreme court for the first time.
Both cases were brought by family members of terrorist attack victims who say social media firms are responsible for stoking violence with their algorithms. The first case, Gonzalez v Google, was argued on 21 February and asks the highest US court to determine whether YouTube, the Google-owned video site, should be held responsible for recommending Islamic State terrorism videos. The second, which will be heard later this week, targets Twitter and Facebook in addition to Google with similar allegations.
Together they could represent the most pivotal challenge yet to section 230 of the Communications Decency Act, a statute that protects tech companies such as YouTube from being held liable for content shared and recommended on their platforms. The stakes are high: a ruling in favor of holding YouTube liable could expose all platforms, big and small, to potential litigation over users’ content.
While lawmakers across the aisle have pushed for reforms to the 27-year-old statute, contending companies should be held accountable for hosting harmful content, some civil liberties organizations as well as tech companies have warned changes to section 230 could irreparably debilitate free-speech protections on the internet.
Here’s what you need to know.
What are the details of the two cases?
Gonzalez v Google centers on whether Google can be held accountable for the content that its algorithms recommend, threatening longstanding protections that online publishers have enjoyed under section 230.
YouTube’s parent company Google is being sued by the family of Nohemi Gonzalez, a 23-year-old US citizen who was studying in Paris in 2015 when she was killed in the coordinated Islamic State attacks in and around the French capital. The family is appealing a ruling that held that section 230 shields YouTube from liability for recommending content that incites or calls for acts of violence. In this case, the content in question was IS recruitment videos.
“The defendants are alleged to have recommended that users view inflammatory videos created by ISIS, videos which played a key role in recruiting fighters to join ISIS in its subjugation of a large area of the Middle East, and to commit terrorist acts in their home countries,” court filings read.
In the case of Twitter v Taamneh, family members of the victim of a 2017 terrorist attack allegedly carried out by IS charged that social media firms are to blame for the rise of extremism. The case targets Google as well as Twitter and Facebook.
What does section 230 do?
Passed in 1996, section 230 protects companies such as YouTube, Twitter and Facebook from being held legally responsible for content posted by users. Civil liberties groups point out the statute also offers valuable protections for free speech by giving tech platforms the right to host an array of information without undue censorship.
The supreme court is being asked in this case to determine whether the immunity granted by section 230 also extends to platforms when they are not just hosting content but also making “targeted recommendations of information”. The results of the case will be watched closely, said Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights.
“What’s at stake here are the rules for free expression on the internet,” he said. “This case could help determine whether the major social media platforms continue to provide venues for free expression of all kinds, ranging from political debates to people posting their art and human rights activists telling the world about what’s going wrong in their countries.”
A crackdown on algorithmic recommendations would affect nearly every social media platform. Most steered away from simple chronological feeds after Facebook in 2006 launched its News Feed, an algorithmically driven homepage that recommends content to users based on their online activity.
To rein in this technology is to alter the face of the internet itself, Barrett said. “That’s what social media does – it recommends content.”
How have the justices reacted so far?
As arguments in the Gonzalez case began on Tuesday, the justices struck a cautious tone on section 230, warning that changes to it could trigger a wave of lawsuits. Elena Kagan questioned whether its protections were too sweeping, but she indicated the court had more to learn before making a decision.
“You know, these are not like the nine greatest experts on the internet,” Kagan said, referring to herself and the other justices.
Even justices who have historically been tough critics of internet companies seemed hesitant to change section 230 during Tuesday’s arguments, with Clarence Thomas saying it was unclear how YouTube’s algorithm was responsible for abetting terrorism. John Roberts compared video recommendations to a bookseller suggesting books to a customer.
The court will hear arguments on Thursday for the second case regarding tech firms’ responsibility for recommending extremist content.
What is the response to efforts to reform section 230?
Holding tech companies accountable for their recommendation systems has become a rallying cry for both Republican and Democratic lawmakers. Republicans claim that platforms have suppressed conservative viewpoints, while Democrats say the platforms’ algorithms amplify hate speech and other harmful content.
The debate over section 230 has created a rare consensus across the political spectrum that change must be made, with even Facebook’s Mark Zuckerberg telling Congress that it “may make sense for there to be liability for some of the content”, and that Facebook “would benefit from clearer guidance from elected officials”. Both Joe Biden and his predecessor Donald Trump have called for changes to the measure.
What could go wrong?
Despite lawmakers’ efforts, many companies, academics and human rights advocates have defended section 230, saying that changes to the measure could backfire and significantly alter the internet as we know it.
Firms such as Reddit, Twitter and Microsoft, as well as digital rights groups such as the Electronic Frontier Foundation, have filed briefs with the court arguing that making platforms liable for algorithmic recommendations would have grave effects on free speech and internet content.
Evan Greer, a free speech and digital rights activist, says that holding companies accountable for their recommendation systems could “lead to widespread suppression of legitimate political, religious and other speech”.
“Section 230 is widely misunderstood by the general public,” said Greer, who is also the director of the digital rights group Fight for the Future. “The truth is that Section 230 is a foundational law for human rights and free expression globally, and more or less the only reason that you can still find crucial information online about controversial topics like abortion, sexual health, military actions, police killings, public figures accused of sexual misconduct, and more.”